Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_llama3_8b - bnb 4bits
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_llama3_8b/
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
---
# Saiga/Llama3 8B, Russian Llama-3-based chatbot
Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)
**ОСТОРОЖНО! WARNING! LET OP!**
I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats!
**v4**: Llama-3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|eot_id|><|start_header_id|>user<|end_header_id|>
Как дела?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Отлично, а у тебя?<|eot_id|><|start_header_id|>user<|end_header_id|>
Шикарно. Как пройти в библиотеку?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
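The v4 format above can also be assembled programmatically. The sketch below is an illustration of the template's structure, not the repository's own code; in practice, recent `transformers` versions expose the same template through `tokenizer.apply_chat_template`, which should be preferred:

```python
def build_llama3_prompt(messages):
    """Assemble a Llama-3-style prompt from a list of {"role", "content"} dicts."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Leave the assistant header open so the model generates the next reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Как дела?"},
])
```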
**v2, v3**: ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Как дела?<|im_end|>
<|im_start|>assistant
Отлично, а у тебя?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
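The v2/v3 ChatML variant can be built the same way — again a sketch for illustration, not the original training or inference code:

```python
def build_chatml_prompt(messages):
    """Assemble a ChatML prompt (v2/v3 format) from {"role", "content"} dicts."""
    lines = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # End with an open assistant turn for the model to complete.
    lines.append("<|im_start|>assistant")
    return "\n".join(lines)

prompt = build_chatml_prompt([
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им."},
    {"role": "user", "content": "Как дела?"},
])
```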
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием хлорофилла, пигмента, который поглощает свет и передает энергию для фотосинтеза. Хлорофилл содержится в клетках листьев растений и играет ключевую роль в процессе фотосинтеза, когда растения используют солнечный свет для превращения воды и углекислого газа в органические вещества, такие как глюкоза и олигоэфиры.
Хлорофилл состоит из двух основных форм: хлорофилла а и хлорофилла б. Хлорофилл а поглощает свет с длиной волны около 430 нанометров (нм), что соответствует синему или голубому свету, а хлорофилл б поглощает свет с длиной волны около 660 нм, что соответствует красному свету. Когда свет поглощается хлорофиллом, он передается в молекулы воды и углекислого газа, которые используются для фотосинтеза.
Таким образом, хлорофилл является основным компонентом, ответственным за зеленый цвет листьев растений, так как он поглощает свет и передает его энергию для фотосинтеза.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды в солнечную погоду на поле для игры в футбол приехала группа друзей, чтобы провести время вместе. В числе их был и Таня, которая была страстной футбольной фанаткой и всегда радовал своих друзей своим энтузиазмом и умением забивать мячи.
В этот день, как обычно, друзья решили провести соревнование между собой, чтобы определить кто из них является лучшим футболистом. Таня была очень уверена в своих силах и готовилась к этому матчу с особой серьёзностью.
Когда все друзья собрались на поле, они увидели, что перед ними стоял огромный мяч, который должен был стать предметом состязания. Мяч был огромным и тяжелым, и его размеры были необычайно большими по сравнению с обычными мячами, которые используются в футболе.
Таня была первая, кто решил начать игру. Она подошла к мячу и начала его удерживать, стараясь выдержать его вес и силу. Но мяч оказался настолько тяжелым, что Таня не смогла удержать его и он упал на землю.
Друзья посмеялись над ее неудачей, но Таня не отчаивалась и продолжила пытаться удержать мяч. Она стала использовать все свои силы и умения, чтобы выдержать его вес и силу. Наконец, после долгих усилий, она смогла удержать мяч и начала его бросать в сторону.
Мяч летел высоко вверх, и друзья смотрели, как он пролетает над полем. Но мяч неожиданно повернул и стал лететь обратно к Тане. Она успела поймать его и продолжила играть, используя все свои навыки и умения.
```
v4:
- [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt)
v3:
- [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal)
v2:
- [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b)
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
# Evaluation
* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
|chatgpt_4_turbo | 76.04 | 90.00 |1.46 | 1270 |
|chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
|saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 |
|saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 |
|saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 |
|saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_gemma_9b - bnb 4bits
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_gemma_9b/
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
---
# Saiga/Gemma 9B, Russian Gemma-based chatbot
Based on [Gemma 7B](https://huggingface.co/google/gemma-7b).
Training Colab: [link](https://colab.research.google.com/drive/1O7F7Q3IQYh-v7EfsdRwMDMfUnOYZ3DEh).
ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Привет!<|im_end|>
<|im_start|>assistant
Отлично, а ты?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием в её клетках пигмента хлорофилла, который играет важную роль в процессе фотосинтеза – преобразования солнечного света в энергию, используемую для роста и развития растений.
Хлорофилл поглощает свет в определенных диапазонах длин волн, в том числе в зеленой области спектра, и отражает свет в других диапазонах, что и обуславливает зелёный цвет.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды, в далекой стране, жила маленькая девочка по имени Таня. Она была очень активной и любила играть в мяч. Вместе со своими друзьями они проводили дни, играя в футбол, баскетбол и волейбол.
Таня была очень талантливой и ловкой, и она часто выигрывала в играх с другими детьми. Но однажды, во время игры в баскетбол, она получила сильную травму. Ей пришлось провести несколько недель в больнице, и она была очень грустной и унылой.
Её друзья не могли понять, как помочь Тане справиться с этой ситуацией. Они пытались поддерживать её, но Таня была слишком подавлена и не хотела играть в мяч.
Однажды, Таня увидела, как её друзья играют в мяч. Она не могла не смотреть, как они ловко передают мяч друг другу и забивают голы. Таня почувствовала, как её сердце разрывается от желания играть вместе с ними.
Она решила, что ей нужно вернуться в игру, чтобы снова почувствовать радость игры. Она начала медленно восстанавливаться и постепенно возвращалась к своему любимому занятию - игре в мяч.
Таня стала играть в мяч снова, но теперь она была более осторожна и внимательна к своему здоровью. Её друзья были очень рады видеть, как Таня возвращается к своей привычной жизни.
С тех пор, Таня стала настоящей звездой на поле. Она стала лучшей игроком в своей команде и помогла им выиграть множество игр. Её друзья были горды, что Таня вернулась к своему любимому занятию и снова стала такой же активной и ловкой, как раньше.
Таня и её друзья продолжали играть в мяч и проводили чудесные дни вместе. Они были счастливы, что Таня вернулась к своему любимому занятию и снова стала частью команды.
Таня и её друзья продолжают играть в мяч и вместе они создают прекрасные воспоминания.
```
v1:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/gemma_test/runs/k7u3uw5i)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
A [llamafile](https://github.com/Mozilla-Ocho/llamafile) generated for [moondream2](https://huggingface.co/vikhyatk/moondream2)
Big thanks to [@jartine](https://huggingface.co/jartine) and [@vikhyat](https://huggingface.co/vikhyatk/moondream2) for their respective works on llamafile and moondream
## How to Run (on macos and linux)
1. Download moondream2.llamafile
2. `chmod +x moondream2.llamafile` - make it executable
3. `./moondream2.llamafile` - run the llama.cpp server
## Versions
1. [Q5_M](https://huggingface.co/cjpais/moondream2-llamafile/resolve/main/moondream2-q5_k.llamafile?download=true)
2. [Q8_0](https://huggingface.co/cjpais/moondream2-llamafile/resolve/main/moondream2-q8.llamafile?download=true)
From my short testing, the Q8 is noticeably better.
# ORIGINAL MODEL CARD
moondream2 is a small vision language model designed to run efficiently on edge devices. Check out the [GitHub repository](https://github.com/vikhyat/moondream) for details, or try it out on the [Hugging Face Space](https://huggingface.co/spaces/vikhyatk/moondream2)!
**Benchmarks**
| Release | VQAv2 | GQA | TextVQA | TallyQA (simple) | TallyQA (full) |
| --- | --- | --- | --- | --- | --- |
| 2024-03-04 | 74.2 | 58.5 | 36.4 | - | - |
| 2024-03-06 | 75.4 | 59.8 | 43.1 | 79.5 | 73.2 |
| 2024-03-13 | 76.8 | 60.6 | 46.4 | 79.6 | 73.3 |
| **2024-04-02** (latest) | 77.7 | 61.7 | 49.7 | 80.1 | 74.2 |
**Usage**
```bash
pip install transformers einops
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
model_id = "vikhyatk/moondream2"
revision = "2024-04-02"
model = AutoModelForCausalLM.from_pretrained(
model_id, trust_remote_code=True, revision=revision
)
tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
image = Image.open('<IMAGE_PATH>')
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "Describe this image.", tokenizer))
```
The model is updated regularly, so we recommend pinning the model version to a
specific release as shown above.
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Starling-LM-7B-beta - bnb 4bits
- Model creator: https://huggingface.co/Nexusflow/
- Original model: https://huggingface.co/Nexusflow/Starling-LM-7B-beta/
Original model description:
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---
# Starling-LM-7B-beta
<!-- Provide a quick summary of what the model is/does. -->
- **Developed by:** The Nexusflow Team (Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- **Model type:** Language Model finetuned with RLHF / RLAIF
- **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and policy optimization method [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
Harnessing the power of the ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model, [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat-3.5-0106:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
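The template strings asserted above follow a regular pattern, so a small helper can flatten a multi-turn conversation into the expected prompt. This is a sketch for convenience, not part of the original card; the literal output strings match the tokenized examples above:

```python
def openchat_prompt(turns, mode="GPT4 Correct"):
    """Flatten (role, content) turns into the OpenChat-3.5-0106 template.

    role is "User" or "Assistant"; pass mode="Code" for the coding variant.
    """
    parts = [f"{mode} {role}: {content}<|end_of_turn|>" for role, content in turns]
    # End with an open assistant slot for generation.
    parts.append(f"{mode} Assistant:")
    return "".join(parts)

single = openchat_prompt([("User", "Hello")])
multi = openchat_prompt([("User", "Hello"), ("Assistant", "Hi"), ("User", "How are you today?")])
coding = openchat_prompt([("User", "Implement quicksort using C++")], mode="Code")
```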
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        max_length=256,  # counts prompt tokens too; use max_new_tokens to bound only the reply
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
## License
The dataset, model, and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation, and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan, and ShareGPT.
## Citation
```
@misc{starling2023,
title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
url = {},
author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
month = {November},
year = {2023}
}
```
| {} | RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-4bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1909.08593",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-26T19:08:24+00:00 | [
"1909.08593"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1909.08593 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Starling-LM-7B-beta - bnb 4bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---
# Starling-LM-7B-beta
- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- Model type: Language Model finetuned with RLHF / RLAIF
- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).
Harnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.
Our model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.
In addition, our model is hosted on LMSYS Chatbot Arena for free test.
The conversation template is the same as Openchat-3.5-0106:
## Code Examples
## License
The dataset, model and online demo is subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develope the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.
| [
"# Starling-LM-7B-beta\n\n\n\n- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).\n- Model type: Language Model finetuned with RLHF / RLAIF\n- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI\n- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)\n \n\n\nWe introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).\nHarnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:",
"## Code Examples",
"## License\nThe dataset, model and online demo is subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.",
"## Acknowledgment\nWe would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develope the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1909.08593 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Starling-LM-7B-beta\n\n\n\n- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).\n- Model type: Language Model finetuned with RLHF / RLAIF\n- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI\n- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)\n \n\n\nWe introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).\nHarnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:",
"## Code Examples",
"## License\nThe dataset, model and online demo is subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.",
"## Acknowledgment\nWe would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develope the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT."
] |
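For readers unfamiliar with the Openchat-3.5-0106 conversation template referenced in the Starling card above, it can be sketched as a plain string builder. This is an illustrative helper based on the upstream OpenChat cards, not part of any library — verify against the tokenizer's built-in chat template before relying on it:

```python
def openchat_prompt(turns):
    """Build an Openchat-3.5-0106-style prompt from (role, content) pairs."""
    pieces = []
    for role, content in turns:
        tag = "GPT4 Correct User" if role == "user" else "GPT4 Correct Assistant"
        pieces.append(f"{tag}: {content}<|end_of_turn|>")
    pieces.append("GPT4 Correct Assistant:")  # open the assistant turn for generation
    return "".join(pieces)

print(openchat_prompt([("user", "Hello")]))
# GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:
```

In practice, `tokenizer.apply_chat_template(...)` is the safer way to produce this string, since it stays in sync with the model's own template.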
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4741
- Accuracy: 0.5251
- F1: 0.5348
- Precision: 0.5692
- Recall: 0.5251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
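For intuition, the linear scheduler with 500 warmup steps ramps the learning rate from 0 up to 5e-05 and then decays it linearly back to 0. A minimal sketch follows; the 10,000 total steps is an illustrative assumption, since the actual step count is not reported above:

```python
def lr_at_step(step, base_lr=5e-05, warmup_steps=500, total_steps=10_000):
    # Linear warmup from 0 to base_lr, then linear decay back to 0,
    # mirroring the semantics of a linear schedule with warmup.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(lr_at_step(250))  # halfway through warmup -> 2.5e-05
print(lr_at_step(500))  # warmup finished -> 5e-05
```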
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "bert-base-uncased", "model-index": [{"name": "results", "results": []}]} | Shreyagg2202/Bert-Custom-Sentiment-Analysis | null | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:08:47+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# results
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4741
- Accuracy: 0.5251
- F1: 0.5348
- Precision: 0.5692
- Recall: 0.5251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Tokenizers 0.19.1
| [
"# results\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4741\n- Accuracy: 0.5251\n- F1: 0.5348\n- Precision: 0.5692\n- Recall: 0.5251",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# results\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4741\n- Accuracy: 0.5251\n- F1: 0.5348\n- Precision: 0.5692\n- Recall: 0.5251",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Tokenizers 0.19.1"
] |
text-generation | transformers | prompt
```
<original>Ok. What do the drivers look like?</original>
<translate to="th">
```
response
```
<original>กรุงเทพอยู่ที่ไหน</original>
<translate to="en">where is bangkok</translate><eos>
```
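A small sketch of building this prompt and parsing the reply with pure string handling (the model call itself is omitted; `build_prompt` and `parse_translation` are illustrative helper names, not part of this repository):

```python
import re

def build_prompt(text, target_lang):
    # Mirror the format above: source text wrapped in <original>,
    # followed by an opened <translate> tag for the model to complete.
    return f'<original>{text}</original>\n<translate to="{target_lang}">'

def parse_translation(generated):
    # Normally the model closes the tag; otherwise trim any trailing
    # partial tag, since the model sometimes emits stray markup instead.
    m = re.search(r"(.*?)</translate>", generated, flags=re.S)
    if m:
        return m.group(1).strip()
    return re.sub(r"</?\w*$", "", generated).strip()

print(parse_translation("where is bangkok</translate><eos>"))  # where is bangkok
```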
This model sometimes fails to follow the format and outputs stray HTML (`</input`) at the end of the text:
```
<original>ตำราเรียน (อังกฤษ: Textbook) คือหนังสือที่ใช้ในการศึกษาหาความรู้จากวิชาต่าง ๆ ผู้คนมักใช้ตำราเรียนในการเรียนรู้ข้อเท็จจริงและวิธีการที่เกี่ยวข้องกับรายวิชานั้น ๆ ในบางครั้งตำราเรียนมักมีคำถามเพื่อทดสอบความรู้และความเข้าใจของผู้อ่าน ตำราเรียนจะถูกผลิตจากความต้องการของสถาบันการศึกษา ตำราเรียนส่วนมากมักมีลักษณะเป็นสิ่งพิมพ์ แต่ในปัจจุบันพบว่าหลาย ๆ ตำราเรียนสามารถเข้าถึงได้โดยการออนไลน์ ในรูปแบบของหนังสืออิเล็กทรอนิกส์</original>
<translate to="en">Textbooks are books that contain the content of a subject, typically written from an academic viewpoint and intended for use by students. In some countries textbooks can be called "school-book", while in other places they may simply go under this title.</input
``` | {"language": ["th", "en"], "datasets": ["scb_mt_enth_2020"], "pipeline_tag": "text-generation"} | ping98k/gemma-7b-translator-0.3 | null | [
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"th",
"en",
"dataset:scb_mt_enth_2020",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T19:10:39+00:00 | [] | [
"th",
"en"
] | TAGS
#transformers #safetensors #gguf #gemma #text-generation #th #en #dataset-scb_mt_enth_2020 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| prompt
response
This model sometimes fails to follow the format and outputs stray HTML ('</input') at the end of the text
| [] | [
"TAGS\n#transformers #safetensors #gguf #gemma #text-generation #th #en #dataset-scb_mt_enth_2020 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [**Meta-Llama-3-8B**](meta-llama/Meta-Llama-3-8B) base model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
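As a rough illustration of the DPO objective linked above (a sketch of the loss from the DPO paper, not the training code actually used for OpenBioLLM), the per-example loss is `-log σ(β[(log π(y_w|x) − log π_ref(y_w|x)) − (log π(y_l|x) − log π_ref(y_l|x))])`; in plain Python:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    # Inputs are summed log-probs of the chosen (w) and rejected (l) answers
    # under the policy and the frozen reference model.
    logits = beta * ((policy_chosen - ref_chosen) - (policy_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log(sigmoid(logits))

print(round(dpo_loss(0.0, 0.0, 0.0, 0.0), 4))  # 0.6931 = log(2): no preference learned yet
```

The loss shrinks as the policy assigns relatively more probability to the preferred answer than the reference does, which is what pushes the model toward the annotated preferences.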
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to reduce how often this happens.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # device placement; `device="auto"` is not a valid pipeline argument
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs, who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, i.e. temperature = 0; sampling with temperature=0.0 raises an error
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
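The per-model averages in the table can be sanity-checked with a few lines; for example, the OpenBioLLM-8B row:

```python
# OpenBioLLM-8B scores from the nine benchmark columns above
scores = [76.101, 86.1, 69.829, 78.21, 84.213, 68.042, 58.993, 74.12, 56.913]
avg = sum(scores) / len(scores)
print(round(avg, 3))  # 72.502, matching the Avg column
```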
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-8B-GGUF | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2024-04-26T19:11:19+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-8B builds upon the powerful foundation of the Meta-Llama-3-8B base model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Ranking Dataset: berkeley-nest/Nectar</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 8 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: meta-llama/Meta-Llama-3-8B</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-8B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by the Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to reduce how often this happens.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 1</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>- lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
<div align=)
 from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
<div align=)
Contact
--------
We look forward to hearing from you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (URL)
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #region-us \n"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
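The calculator referenced above essentially multiplies the energy drawn during training by the grid's carbon intensity. A minimal sketch of that arithmetic (the wattage, hours, and intensity values in the usage line are illustrative placeholders, not figures from this card):

```python
def estimate_co2_grams(hours, gpu_watts, carbon_intensity_g_per_kwh, pue=1.0):
    """Rough CO2eq estimate in grams.

    Energy in kWh (hours x watts / 1000, scaled by datacenter PUE)
    multiplied by the grid's carbon intensity in gCO2eq per kWh.
    """
    kwh = hours * gpu_watts / 1000.0 * pue
    return kwh * carbon_intensity_g_per_kwh


# Hypothetical example: 100 GPU-hours at 300 W on a 400 gCO2/kWh grid.
print(estimate_co2_grams(100, 300, 400))  # -> 12000.0 g, i.e. 12 kg CO2eq
```

The real calculator also accounts for cloud region and provider defaults; this only shows the core formula.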
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 | {"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-70B-Instruct"} | asbabiy/AspectLens-BA-Large | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"region:us"
] | null | 2024-04-26T19:12:25+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_llama3_8b - bnb 8bits
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_llama3_8b/
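The "bnb 8bits" in the name refers to bitsandbytes 8-bit weight quantization. The actual LLM.int8 scheme is more involved (vector-wise scaling plus mixed-precision handling of outlier features), but the core absmax idea — scale each tensor so its largest absolute value maps to int8's range — can be sketched in plain Python:

```python
def absmax_quantize(weights):
    """Symmetric absmax quantization to int8.

    Scale so the largest |w| maps to 127; each weight is then
    stored as a rounded int8 value plus one shared float scale.
    """
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale


def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [q * scale for q in quantized]


weights = [0.5, -1.0, 0.25]
q, scale = absmax_quantize(weights)
print(q)                      # int8-range integers
print(dequantize(q, scale))   # close to the originals, within scale/2
```

Real 8-bit loading goes through `BitsAndBytesConfig(load_in_8bit=True)` in transformers rather than anything hand-rolled like this.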
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
---
# Saiga/Llama3 8B, Russian Llama-3-based chatbot
Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)
**ОСТОРОЖНО! WARNING! LET OP!**
I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats!
**v4**: Llama-3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|eot_id|><|start_header_id|>user<|end_header_id|>
Как дела?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Отлично, а у тебя?<|eot_id|><|start_header_id|>user<|end_header_id|>
Шикарно. Как пройти в библиотеку?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
**v2, v3**: ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Как дела?<|im_end|>
<|im_start|>assistant
Отлично, а у тебя?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
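Both templates above can be assembled programmatically. A minimal plain-Python sketch (in practice `tokenizer.apply_chat_template` from transformers handles this, and exact whitespace should be checked against the model's own chat template):

```python
def build_llama3_prompt(messages):
    """Assemble the v4 Llama-3 chat format shown above and
    leave the prompt open for the assistant's reply."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>"
            f"\n\n{m['content']}<|eot_id|>"
        )
    return out + "<|start_header_id|>assistant<|end_header_id|>\n\n"


def build_chatml_prompt(messages):
    """Assemble the ChatML format used by v2/v3."""
    out = "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )
    return out + "<|im_start|>assistant\n"


messages = [
    {"role": "system", "content": "Ты — Сайга, русскоязычный автоматический ассистент."},
    {"role": "user", "content": "Как дела?"},
]
print(build_llama3_prompt(messages))
```

Mixing up the two formats (e.g. sending ChatML to v4) is exactly the failure mode the warning above is about.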
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием хлорофилла, пигмента, который поглощает свет и передает энергию для фотосинтеза. Хлорофилл содержится в клетках листьев растений и играет ключевую роль в процессе фотосинтеза, когда растения используют солнечный свет для превращения воды и углекислого газа в органические вещества, такие как глюкоза и олигоэфиры.
Хлорофилл состоит из двух основных форм: хлорофилла а и хлорофилла б. Хлорофилл а поглощает свет с длиной волны около 430 нанометров (нм), что соответствует синему или голубому свету, а хлорофилл б поглощает свет с длиной волны около 660 нм, что соответствует красному свету. Когда свет поглощается хлорофиллом, он передается в молекулы воды и углекислого газа, которые используются для фотосинтеза.
Таким образом, хлорофилл является основным компонентом, ответственным за зеленый цвет листьев растений, так как он поглощает свет и передает его энергию для фотосинтеза.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды в солнечную погоду на поле для игры в футбол приехала группа друзей, чтобы провести время вместе. В числе их был и Таня, которая была страстной футбольной фанаткой и всегда радовал своих друзей своим энтузиазмом и умением забивать мячи.
В этот день, как обычно, друзья решили провести соревнование между собой, чтобы определить кто из них является лучшим футболистом. Таня была очень уверена в своих силах и готовилась к этому матчу с особой серьёзностью.
Когда все друзья собрались на поле, они увидели, что перед ними стоял огромный мяч, который должен был стать предметом состязания. Мяч был огромным и тяжелым, и его размеры были необычайно большими по сравнению с обычными мячами, которые используются в футболе.
Таня была первая, кто решил начать игру. Она подошла к мячу и начала его удерживать, стараясь выдержать его вес и силу. Но мяч оказался настолько тяжелым, что Таня не смогла удержать его и он упал на землю.
Друзья посмеялись над ее неудачей, но Таня не отчаивалась и продолжила пытаться удержать мяч. Она стала использовать все свои силы и умения, чтобы выдержать его вес и силу. Наконец, после долгих усилий, она смогла удержать мяч и начала его бросать в сторону.
Мяч летел высоко вверх, и друзья смотрели, как он пролетает над полем. Но мяч неожиданно повернул и стал лететь обратно к Тане. Она успела поймать его и продолжила играть, используя все свои навыки и умения.
```
v4:
- [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt)
v3:
- [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal)
v2:
- [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b)
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
# Evaluation
* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
| chatgpt_4_turbo | 76.04 | 90.00 | 1.46 | 1270 |
| chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
| saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 |
| saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 |
| saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 |
| saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
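The `win_rate` and `standard_error` columns are the per-example mean outcome and its standard error; alpaca_eval's length-controlled variant additionally regresses out response length, but the plain columns reduce to a simple mean-and-SE computation. An illustrative helper (not alpaca_eval's actual code):

```python
import math


def win_rate_and_se(outcomes):
    """Win rate (%) and standard error of the mean for pairwise comparisons.

    outcomes: per-example scores, 1.0 = win, 0.5 = tie, 0.0 = loss.
    """
    n = len(outcomes)
    mean = sum(outcomes) / n
    # Sample variance, then SE of the mean = sqrt(var / n).
    var = sum((x - mean) ** 2 for x in outcomes) / (n - 1)
    se = math.sqrt(var / n)
    return 100.0 * mean, 100.0 * se


# Hypothetical run: 65 wins, 35 losses out of 100 comparisons.
wr, se = win_rate_and_se([1.0] * 65 + [0.0] * 35)
print(f"{wr:.2f} +/- {se:.2f}")
```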
| {} | RichardErkhov/IlyaGusev_-_saiga_llama3_8b-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-26T19:13:40+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
saiga\_llama3\_8b - bnb 8bits
* Model creator: URL
* Original model: URL
Original model description:
---------------------------
language:
* ru
datasets:
* IlyaGusev/saiga\_scored
license: other
license\_name: llama3
license\_link: URL
---
Saiga/Llama3 8B, Russian Llama-3-based chatbot
==============================================
Based on Llama-3 8B Instruct.
URL version: link
ОСТОРОЖНО! WARNING! LET OP!
I've changed the prompt format from ChatML to the original Llama-3 format in v4. Don't forget to switch formats!
v4: Llama-3 prompt format:
v2, v3: ChatML prompt format:
Examples:
v4:
* 1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd
* dataset: saiga\_scored, scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
* wandb link
v3:
* c588356cd60bdee54d52c2dd5a2445acca8aa5c3
* dataset: saiga\_scored, scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
* wandb link
v2:
* ae61b4f9b34fac9856d361ea78c66284a00e4f0b
* dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
* wandb link
* 5 datasets: ru\_turbo\_saiga, ru\_sharegpt\_cleaned, oasst1\_ru\_main\_branch, gpt\_roleplay\_realm, ru\_instruct\_gpt4
* Datasets merging script: create\_short\_chat\_set.py
Evaluation
==========
* Dataset: URL
* Framework: URL
* Evaluator: alpaca\_eval\_cot\_gpt4\_turbo\_fn
| [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": ["unsloth", "trl", "sft"]} | cashu/mistral_16_bit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T19:13:49+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #unsloth #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_gemma_9b - bnb 8bits
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_gemma_9b/
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
---
# Saiga/Gemma 9B, Russian Gemma-based chatbot
Based on [Gemma 7B](https://huggingface.co/google/gemma-7b).
Training Colab: [link](https://colab.research.google.com/drive/1O7F7Q3IQYh-v7EfsdRwMDMfUnOYZ3DEh).
ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Привет!<|im_end|>
<|im_start|>assistant
Отлично, а ты?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием в её клетках пигмента хлорофилла, который играет важную роль в процессе фотосинтеза – преобразования солнечного света в энергию, используемую для роста и развития растений.
Хлорофилл поглощает свет в определенных диапазонах длин волн, в том числе в зеленой области спектра, и отражает свет в других диапазонах, что и обуславливает зелёный цвет.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды, в далекой стране, жила маленькая девочка по имени Таня. Она была очень активной и любила играть в мяч. Вместе со своими друзьями они проводили дни, играя в футбол, баскетбол и волейбол.
Таня была очень талантливой и ловкой, и она часто выигрывала в играх с другими детьми. Но однажды, во время игры в баскетбол, она получила сильную травму. Ей пришлось провести несколько недель в больнице, и она была очень грустной и унылой.
Её друзья не могли понять, как помочь Тане справиться с этой ситуацией. Они пытались поддерживать её, но Таня была слишком подавлена и не хотела играть в мяч.
Однажды, Таня увидела, как её друзья играют в мяч. Она не могла не смотреть, как они ловко передают мяч друг другу и забивают голы. Таня почувствовала, как её сердце разрывается от желания играть вместе с ними.
Она решила, что ей нужно вернуться в игру, чтобы снова почувствовать радость игры. Она начала медленно восстанавливаться и постепенно возвращалась к своему любимому занятию - игре в мяч.
Таня стала играть в мяч снова, но теперь она была более осторожна и внимательна к своему здоровью. Её друзья были очень рады видеть, как Таня возвращается к своей привычной жизни.
С тех пор, Таня стала настоящей звездой на поле. Она стала лучшей игроком в своей команде и помогла им выиграть множество игр. Её друзья были горды, что Таня вернулась к своему любимому занятию и снова стала такой же активной и ловкой, как раньше.
Таня и её друзья продолжали играть в мяч и проводили чудесные дни вместе. Они были счастливы, что Таня вернулась к своему любимому занятию и снова стала частью команды.
Таня и её друзья продолжают играть в мяч и вместе они создают прекрасные воспоминания.
```
v1:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/gemma_test/runs/k7u3uw5i)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
| {} | RichardErkhov/IlyaGusev_-_saiga_gemma_9b-8bits | null | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-26T19:14:13+00:00 | [] | [] | TAGS
#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
saiga_gemma_9b - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
---
# Saiga/Gemma 9B, Russian Gemma-based chatbot
Based on Gemma 7B.
Training Colab: link.
ChatML prompt format:
Examples:
v1:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb link
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: create_short_chat_set.py
| [
"# Saiga/Gemma 9B, Russian Gemma-based chatbot\n\nBased on Gemma 7B.\n\nTraining Colab: link.\n\nChatML prompt format:\n\n\nExamples:\n\n\n\n\nv1:\n- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a\n- wandb link\n- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4\n- Datasets merging script: create_short_chat_set.py"
] | [
"TAGS\n#transformers #safetensors #gemma #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Saiga/Gemma 9B, Russian Gemma-based chatbot\n\nBased on Gemma 7B.\n\nTraining Colab: link.\n\nChatML prompt format:\n\n\nExamples:\n\n\n\n\nv1:\n- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a\n- wandb link\n- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4\n- Datasets merging script: create_short_chat_set.py"
] |
null | null | White teeth
smiles a lot to hide her pain
African American
Eyes blue as the deep sea
a lot of curly hair
Can speak more than 5 languages
Is just looking for peace and quiet
Desires to be a sloth but to have a lot of money
| {} | Ytsheeqee/meta-verse-creation | null | [
"region:us"
] | null | 2024-04-26T19:14:51+00:00 | [] | [] | TAGS
#region-us
| White teeth
smiles a lot to hide her pain
African American
Eyes blue as the deep sea
a lot of curly hair
Can speak more than 5 languages
Is just looking for peace and quiet
Desires to be a sloth but to have a lot of money
| [] | [
"TAGS\n#region-us \n"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/chujiezheng/tulu-2-dpo-70b-ExPO
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
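For the split quants, joining the parts is a plain byte-level append of the pieces in order. A hedged sketch with tiny stand-in files in place of the real multi-GB downloads:

```python
import os
import shutil
import tempfile

def concat_parts(part_paths, out_path):
    """Byte-append split GGUF pieces (*.part1of2, *.part2of2, ...) in order."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            with open(part, "rb") as src:
                shutil.copyfileobj(src, out)

# Demo with tiny stand-in files instead of the real multi-gigabyte parts.
tmp = tempfile.mkdtemp()
p1 = os.path.join(tmp, "model.Q6_K.gguf.part1of2")
p2 = os.path.join(tmp, "model.Q6_K.gguf.part2of2")
with open(p1, "wb") as f:
    f.write(b"GGUF-head")
with open(p2, "wb") as f:
    f.write(b"-tail")
merged = os.path.join(tmp, "model.Q6_K.gguf")
concat_parts([p1, p2], merged)
print(open(merged, "rb").read())  # b'GGUF-head-tail'
```

On a Unix shell, `cat model.part1of2 model.part2of2 > model.gguf` does the same thing.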
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q2_K.gguf) | Q2_K | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.IQ3_XS.gguf) | IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.IQ3_S.gguf) | IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q3_K_S.gguf) | Q3_K_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.IQ3_M.gguf) | IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q3_K_M.gguf) | Q3_K_M | 33.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q3_K_L.gguf) | Q3_K_L | 36.2 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.IQ4_XS.gguf) | IQ4_XS | 37.3 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q4_K_S.gguf) | Q4_K_S | 39.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q4_K_M.gguf) | Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q5_K_S.gguf) | Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q5_K_M.gguf) | Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q6_K.gguf.part2of2) | Q6_K | 56.7 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/tulu-2-dpo-70b-ExPO-GGUF/resolve/main/tulu-2-dpo-70b-ExPO.Q8_0.gguf.part2of2) | Q8_0 | 73.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "base_model": "chujiezheng/tulu-2-dpo-70b-ExPO", "license_link": "https://allenai.org/impact-license", "license_name": "ai2-impact-license-low-risk", "quantized_by": "mradermacher"} | mradermacher/tulu-2-dpo-70b-ExPO-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:chujiezheng/tulu-2-dpo-70b-ExPO",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:16:11+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #en #base_model-chujiezheng/tulu-2-dpo-70b-ExPO #license-other #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #en #base_model-chujiezheng/tulu-2-dpo-70b-ExPO #license-other #endpoints_compatible #region-us \n"
] |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Starling-LM-7B-beta - bnb 8bits
- Model creator: https://huggingface.co/Nexusflow/
- Original model: https://huggingface.co/Nexusflow/Starling-LM-7B-beta/
Original model description:
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---
# Starling-LM-7B-beta
<!-- Provide a quick summary of what the model is/does. -->
- **Developed by:** The Nexusflow Team (Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- **Model type:** Language Model finetuned with RLHF / RLAIF
- **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and policy optimization method [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
Harnessing the power of the ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model, [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.**
Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat-3.5-0106:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
def generate_response(prompt):
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    outputs = model.generate(
        input_ids,
        max_length=256,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
    response_ids = outputs[0]
    response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
    return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
## License
The dataset, model and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.
## Citation
```
@misc{starling2023,
title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
url = {},
author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
month = {November},
year = {2023}
}
```
| {} | RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-8bits | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1909.08593",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-26T19:17:31+00:00 | [
"1909.08593"
] | [] | TAGS
#transformers #safetensors #mistral #text-generation #conversational #arxiv-1909.08593 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Starling-LM-7B-beta - bnb 8bits
- Model creator: URL
- Original model: URL
Original model description:
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---
# Starling-LM-7B-beta
- Developed by: The Nexusflow Team (Banghua Zhu*, Evan Frick*, Tianhao Wu*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- Model type: Language Model finetuned with RLHF / RLAIF
- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).
Harnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
Important: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.
Our model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.
In addition, our model is hosted on LMSYS Chatbot Arena for free testing.
The conversation template is the same as Openchat-3.5-0106:
## Code Examples
## License
The dataset, model and online demo are subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of the lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT.
| [
"# Starling-LM-7B-beta\n\n\n\n- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).\n- Model type: Language Model finetuned with RLHF / RLAIF\n- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI\n- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)\n \n\n\nWe introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).\nHarnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:",
"## Code Examples",
"## License\nThe dataset, model and online demo is subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.",
"## Acknowledgment\nWe would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develope the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT."
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1909.08593 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n",
"# Starling-LM-7B-beta\n\n\n\n- Developed by: The Nexusflow Team ( Banghua Zhu * , Evan Frick * , Tianhao Wu * , Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).\n- Model type: Language Model finetuned with RLHF / RLAIF\n- License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI\n- Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)\n \n\n\nWe introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).\nHarnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.",
"## Uses\n\n\n\nImportant: Please use the exact chat template provided below for the model. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.\n\nOur model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.\nIn addition, our model is hosted on LMSYS Chatbot Arena for free test.\n\nThe conversation template is the same as Openchat-3.5-0106:",
"## Code Examples",
"## License\nThe dataset, model and online demo is subject to the Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation.",
"## Acknowledgment\nWe would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of lmsys-chat-1M dataset, evaluation and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develope the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan and ShareGPT."
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Dunjeon/lostmagic-RP_7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
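After downloading (or concatenating) a quant, a quick sanity check is that GGUF files begin with the 4-byte magic `GGUF`. A small sketch (the file names and the trailing version bytes are made up for the demo):

```python
import os
import tempfile

def looks_like_gguf(path):
    """True if the file starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Stand-in files: one with the magic, one without.
tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "model.gguf")
with open(good, "wb") as f:
    f.write(b"GGUF" + b"\x03\x00\x00\x00")  # magic + illustrative version bytes
bad = os.path.join(tmp, "notes.txt")
with open(bad, "wb") as f:
    f.write(b"hello")

print(looks_like_gguf(good), looks_like_gguf(bad))  # True False
```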
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/lostmagic-RP_7B-GGUF/resolve/main/lostmagic-RP_7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
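One rough way to read the size column above: filter quants by the memory you can spare. The sizes are copied from the table; the 1.25x headroom factor is an arbitrary assumption (room for KV cache and overhead), not a measured requirement:

```python
# (name, file size in GB) copied from the table above, smallest to largest.
QUANTS = [
    ("Q2_K", 2.8), ("IQ3_XS", 3.1), ("Q3_K_S", 3.3), ("IQ3_S", 3.3),
    ("IQ3_M", 3.4), ("Q3_K_M", 3.6), ("Q3_K_L", 3.9), ("IQ4_XS", 4.0),
    ("Q4_K_S", 4.2), ("Q4_K_M", 4.5), ("Q5_K_S", 5.1), ("Q5_K_M", 5.2),
    ("Q6_K", 6.0), ("Q8_0", 7.8), ("f16", 14.6),
]

def largest_fitting_quant(ram_gb, headroom=1.25):
    """Pick the biggest quant whose file (plus headroom) fits in ram_gb."""
    fitting = [(name, size) for name, size in QUANTS if size * headroom <= ram_gb]
    return max(fitting, key=lambda t: t[1])[0] if fitting else None

print(largest_fitting_quant(6))  # Q4_K_M (4.5 * 1.25 = 5.625 <= 6)
print(largest_fitting_quant(8))  # Q6_K   (6.0 * 1.25 = 7.5   <= 8)
```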
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["roleplay", "uncensored", "lewd", "mature", "not-for-all-audiences"], "base_model": "Dunjeon/lostmagic-RP_7B", "quantized_by": "mradermacher"} | mradermacher/lostmagic-RP_7B-GGUF | null | [
"transformers",
"gguf",
"roleplay",
"uncensored",
"lewd",
"mature",
"not-for-all-audiences",
"en",
"base_model:Dunjeon/lostmagic-RP_7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:19:10+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #roleplay #uncensored #lewd #mature #not-for-all-audiences #en #base_model-Dunjeon/lostmagic-RP_7B #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #roleplay #uncensored #lewd #mature #not-for-all-audiences #en #base_model-Dunjeon/lostmagic-RP_7B #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold1
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1733
- Accuracy: 0.6654
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
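The `linear` scheduler with a 0.1 warmup ratio ramps the learning rate from 0 to the 1e-05 peak over the first 10% of steps, then decays it linearly back to 0. A framework-free sketch of that multiplier (this mirrors the usual schedule shape; it is not the exact trainer code):

```python
def linear_warmup_decay(step, total_steps, warmup_ratio=0.1):
    """LR multiplier: ramps 0 -> 1 over the warmup steps, then decays 1 -> 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

peak_lr = 1e-5
total_steps = 9240  # 924 optimizer steps per epoch x 10 epochs, as in the results table
lrs = [peak_lr * linear_warmup_decay(s, total_steps) for s in range(total_steps)]
print(lrs[0], max(lrs))  # 0.0 at the start, the 1e-05 peak right after warmup
```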
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1578 | 1.0 | 924 | 1.1827 | 0.5935 |
| 0.9384 | 2.0 | 1848 | 1.0458 | 0.6437 |
| 0.9513 | 3.0 | 2772 | 0.9737 | 0.6665 |
| 0.6634 | 4.0 | 3696 | 1.0015 | 0.6676 |
| 0.6268 | 5.0 | 4620 | 0.9964 | 0.6765 |
| 0.5239 | 6.0 | 5544 | 1.0639 | 0.6681 |
| 0.3603 | 7.0 | 6468 | 1.0932 | 0.6746 |
| 0.3057 | 8.0 | 7392 | 1.1360 | 0.6668 |
| 0.2497 | 9.0 | 8316 | 1.1511 | 0.6665 |
| 0.2212 | 10.0 | 9240 | 1.1733 | 0.6654 |
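Note that the final epoch is not the best one by validation metrics; a short sketch of picking the best checkpoint from the rows above:

```python
# (epoch, validation loss, validation accuracy) rows copied from the table above.
HISTORY = [
    (1, 1.1827, 0.5935), (2, 1.0458, 0.6437), (3, 0.9737, 0.6665),
    (4, 1.0015, 0.6676), (5, 0.9964, 0.6765), (6, 1.0639, 0.6681),
    (7, 1.0932, 0.6746), (8, 1.1360, 0.6668), (9, 1.1511, 0.6665),
    (10, 1.1733, 0.6654),
]

best_by_loss = min(HISTORY, key=lambda row: row[1])  # lowest validation loss
best_by_acc = max(HISTORY, key=lambda row: row[2])   # highest validation accuracy
print(best_by_loss[0], best_by_acc[0])  # 3 5
```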
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-large-patch4-window7-224", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold1", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6654002713704206, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold1 | null | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-large-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:19:17+00:00 | [] | [] | TAGS
#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| Boya1\_RMSProp\_1-e5\_10Epoch\_swin-large-patch4-window7-224\_fold1
===================================================================
This model is a fine-tuned version of microsoft/swin-large-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1733
* Accuracy: 0.6654
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
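The interaction of the warmup ratio with the linear decay above can be sketched in plain Python. Step counts are taken from the results table (9,240 optimizer steps over 10 epochs); the exact Transformers scheduler may differ slightly in rounding:

```python
def linear_schedule_with_warmup(step, total_steps, peak_lr, warmup_ratio=0.1):
    """Learning rate at `step`: linear warmup to peak_lr, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    # linear decay from peak_lr down to 0 over the remaining steps
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total_steps = 9240  # 10 epochs, per the results table
peak_lr = 1e-5      # learning_rate above

print(linear_schedule_with_warmup(0, total_steps, peak_lr))     # 0.0 at the start
print(linear_schedule_with_warmup(924, total_steps, peak_lr))   # peak 1e-05 after warmup (0.1 * 9240 steps)
print(linear_schedule_with_warmup(9240, total_steps, peak_lr))  # 0.0 at the end
```

With warmup ratio 0.1, the learning rate climbs to its 1e-05 peak over the first 924 steps and then decays linearly to zero over the remaining steps.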
### Training results
### Framework versions
* Transformers 4.35.0
* Pytorch 2.1.0
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | quickstep3621/h3hnzyu | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:21:41+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1243
- F1 Score: 0.9559
- Accuracy: 0.9559
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
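The Adam update with the betas and epsilon listed above can be sketched for a single scalar parameter. This is an illustrative re-implementation, not the actual PyTorch optimizer:

```python
def adam_step(param, grad, m, v, t, lr=5e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter, with bias correction."""
    m = beta1 * m + (1 - beta1) * grad        # running mean of gradients
    v = beta2 * v + (1 - beta2) * grad ** 2   # running mean of squared gradients
    m_hat = m / (1 - beta1 ** t)              # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# First step (t=1): after bias correction m_hat = grad and v_hat = grad**2,
# so the update magnitude is roughly lr regardless of the raw gradient scale.
p, m, v = adam_step(0.0, grad=0.5, m=0.0, v=0.0, t=1)
print(p)  # approximately -0.0005 (-lr)
```

This normalization is why a single learning rate of 0.0005 can work across parameters with very different gradient magnitudes.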
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2239 | 0.6 | 200 | 0.1337 | 0.9497 | 0.9497 |
| 0.1349 | 1.2 | 400 | 0.1317 | 0.9474 | 0.9474 |
| 0.1255 | 1.81 | 600 | 0.1126 | 0.9557 | 0.9557 |
| 0.1105 | 2.41 | 800 | 0.1118 | 0.9557 | 0.9557 |
| 0.1119 | 3.01 | 1000 | 0.1097 | 0.9583 | 0.9584 |
| 0.1055 | 3.61 | 1200 | 0.1186 | 0.9545 | 0.9546 |
| 0.1037 | 4.22 | 1400 | 0.1143 | 0.9576 | 0.9576 |
| 0.1021 | 4.82 | 1600 | 0.1028 | 0.9602 | 0.9602 |
| 0.0999 | 5.42 | 1800 | 0.1138 | 0.9551 | 0.9552 |
| 0.0952 | 6.02 | 2000 | 0.0983 | 0.9634 | 0.9634 |
| 0.0971 | 6.63 | 2200 | 0.0973 | 0.9625 | 0.9625 |
| 0.0951 | 7.23 | 2400 | 0.1080 | 0.9617 | 0.9617 |
| 0.0896 | 7.83 | 2600 | 0.1083 | 0.9600 | 0.9601 |
| 0.0926 | 8.43 | 2800 | 0.0990 | 0.9634 | 0.9634 |
| 0.0911 | 9.04 | 3000 | 0.1049 | 0.9623 | 0.9623 |
| 0.0872 | 9.64 | 3200 | 0.0981 | 0.9653 | 0.9653 |
| 0.0843 | 10.24 | 3400 | 0.1062 | 0.9623 | 0.9623 |
| 0.0883 | 10.84 | 3600 | 0.0984 | 0.9642 | 0.9642 |
| 0.0818 | 11.45 | 3800 | 0.0982 | 0.9640 | 0.9640 |
| 0.0845 | 12.05 | 4000 | 0.1020 | 0.9638 | 0.9638 |
| 0.0842 | 12.65 | 4200 | 0.0954 | 0.9640 | 0.9640 |
| 0.0833 | 13.25 | 4400 | 0.1010 | 0.9644 | 0.9644 |
| 0.0788 | 13.86 | 4600 | 0.0963 | 0.9642 | 0.9642 |
| 0.0815 | 14.46 | 4800 | 0.0999 | 0.9644 | 0.9644 |
| 0.0771 | 15.06 | 5000 | 0.0955 | 0.9655 | 0.9655 |
| 0.0772 | 15.66 | 5200 | 0.1040 | 0.9629 | 0.9629 |
| 0.0787 | 16.27 | 5400 | 0.1045 | 0.9629 | 0.9629 |
| 0.0751 | 16.87 | 5600 | 0.0963 | 0.9651 | 0.9651 |
| 0.0735 | 17.47 | 5800 | 0.0947 | 0.9661 | 0.9661 |
| 0.0785 | 18.07 | 6000 | 0.1004 | 0.9642 | 0.9642 |
| 0.0716 | 18.67 | 6200 | 0.1065 | 0.9642 | 0.9642 |
| 0.075 | 19.28 | 6400 | 0.0968 | 0.9663 | 0.9663 |
| 0.0701 | 19.88 | 6600 | 0.1063 | 0.9646 | 0.9646 |
| 0.0736 | 20.48 | 6800 | 0.0987 | 0.9653 | 0.9653 |
| 0.071 | 21.08 | 7000 | 0.0992 | 0.9665 | 0.9665 |
| 0.0708 | 21.69 | 7200 | 0.0985 | 0.9665 | 0.9665 |
| 0.067 | 22.29 | 7400 | 0.1039 | 0.9659 | 0.9659 |
| 0.0701 | 22.89 | 7600 | 0.0980 | 0.9674 | 0.9674 |
| 0.0686 | 23.49 | 7800 | 0.0992 | 0.9663 | 0.9663 |
| 0.0697 | 24.1 | 8000 | 0.0976 | 0.9657 | 0.9657 |
| 0.0674 | 24.7 | 8200 | 0.1086 | 0.9647 | 0.9648 |
| 0.0682 | 25.3 | 8400 | 0.0996 | 0.9670 | 0.9670 |
| 0.0663 | 25.9 | 8600 | 0.0990 | 0.9670 | 0.9670 |
| 0.0663 | 26.51 | 8800 | 0.0998 | 0.9674 | 0.9674 |
| 0.068 | 27.11 | 9000 | 0.0995 | 0.9665 | 0.9665 |
| 0.0644 | 27.71 | 9200 | 0.1019 | 0.9668 | 0.9668 |
| 0.0647 | 28.31 | 9400 | 0.1017 | 0.9665 | 0.9665 |
| 0.0672 | 28.92 | 9600 | 0.1004 | 0.9666 | 0.9666 |
| 0.0625 | 29.52 | 9800 | 0.1013 | 0.9663 | 0.9663 |
| 0.0646 | 30.12 | 10000 | 0.1010 | 0.9663 | 0.9663 |
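The F1 score and accuracy columns above are nearly identical throughout, which is expected for a roughly balanced binary task. A minimal sketch of both metrics (the toy labels below are illustrative, not drawn from this dataset):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_f1(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))   # 0.75
print(binary_f1(y_true, y_pred))  # 0.75
```

When the classes are balanced and errors are split evenly between false positives and false negatives, the two metrics coincide, as in this toy case.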
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:22:01+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_46M-L8\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1243
* F1 Score: 0.9559
* Accuracy: 0.9559
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | liquid9212/nd2jbc1 | null | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:22:22+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Cantonese
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3020
- Wer: 56.0901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
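The gradient accumulation setting above is why a per-device batch size of 4 with 2 accumulation steps gives the listed total batch size of 8. A sketch of the equivalence, using a toy stand-in for the per-example gradient:

```python
def mean_grad(batch):
    """Stand-in for backprop: the batch-averaged gradient of 0.5*x**2 is the mean of x."""
    return sum(batch) / len(batch)

full_batch = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]  # effective batch of 8

# Accumulate over 2 micro-batches of 4, scaling each micro-batch gradient by
# 1/accum_steps, then step once -- equivalent to one step on the full batch.
accum_steps = 2
accumulated = 0.0
for i in range(accum_steps):
    micro = full_batch[i * 4:(i + 1) * 4]
    accumulated += mean_grad(micro) / accum_steps

print(accumulated)            # 4.5
print(mean_grad(full_batch))  # 4.5
```

This trades memory for time: each optimizer step sees gradients from 8 examples while only 4 ever sit on the device at once.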
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0965 | 1.4174 | 1000 | 0.2585 | 58.6483 |
| 0.0307 | 2.8349 | 2000 | 0.2716 | 57.0065 |
| 0.0035 | 4.2523 | 3000 | 0.2918 | 56.2810 |
| 0.001 | 5.6697 | 4000 | 0.3020 | 56.0901 |
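The Wer column above is the word error rate in percent. A minimal sketch of how WER is computed, as word-level edit distance divided by the number of reference words (note that for Cantonese, evaluation is often done per character rather than per word; this word-level version is only illustrative):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / number of reference words.

    Assumes a non-empty reference string.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between ref[:i-1] and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (0 if words match)
        prev = cur
    return prev[-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words
```

Scaled by 100, this is the percentage reported in the table, so 56.09 means roughly 56 errors per 100 reference units.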
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"language": ["yue"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Cantanese", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 16.0", "type": "mozilla-foundation/common_voice_16_0", "config": "yue", "split": "None", "args": "config: yue, split: test"}, "metrics": [{"type": "wer", "value": 56.09011072928599, "name": "Wer"}]}]}]} | Practice-canto/whisper-small-cantonese_26-04-2024-1922 | null | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"yue",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:22:26+00:00 | [] | [
"yue"
] | TAGS
#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #yue #dataset-mozilla-foundation/common_voice_16_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
| Whisper Small Cantonese
=======================
This model is a fine-tuned version of openai/whisper-small on the Common Voice 16.0 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3020
* Wer: 56.0901
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* training\_steps: 4000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #yue #dataset-mozilla-foundation/common_voice_16_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 4000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1 Score: 0.9555
- Accuracy: 0.9555
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
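The Adam settings above (betas=(0.9,0.999), epsilon=1e-08) correspond to the standard bias-corrected update rule. A single-parameter sketch for illustration (`adam_step` is a hypothetical helper, not PEFT or Trainer code):

```python
def adam_step(p, g, m, v, t, lr=5e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter p with gradient g at step t >= 1."""
    m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment (uncentered variance) estimate
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (v_hat ** 0.5 + eps)
    return p, m, v
```

With a unit gradient at the first step, the bias-corrected update is very close to the raw learning rate (here 0.0005), which is why the first steps move parameters at roughly `lr` regardless of gradient scale.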
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2044 | 0.6 | 200 | 0.1225 | 0.9555 | 0.9555 |
| 0.126 | 1.2 | 400 | 0.1318 | 0.9485 | 0.9486 |
| 0.1202 | 1.81 | 600 | 0.1154 | 0.9561 | 0.9561 |
| 0.1036 | 2.41 | 800 | 0.1075 | 0.9604 | 0.9604 |
| 0.1066 | 3.01 | 1000 | 0.1122 | 0.9608 | 0.9608 |
| 0.0987 | 3.61 | 1200 | 0.1206 | 0.9568 | 0.9568 |
| 0.097 | 4.22 | 1400 | 0.1066 | 0.9612 | 0.9612 |
| 0.0938 | 4.82 | 1600 | 0.0984 | 0.9614 | 0.9614 |
| 0.0909 | 5.42 | 1800 | 0.1044 | 0.9602 | 0.9602 |
| 0.0877 | 6.02 | 2000 | 0.0958 | 0.9631 | 0.9631 |
| 0.0886 | 6.63 | 2200 | 0.0928 | 0.9644 | 0.9644 |
| 0.0836 | 7.23 | 2400 | 0.1026 | 0.9632 | 0.9633 |
| 0.0779 | 7.83 | 2600 | 0.1011 | 0.9642 | 0.9642 |
| 0.0797 | 8.43 | 2800 | 0.0986 | 0.9640 | 0.9640 |
| 0.0789 | 9.04 | 3000 | 0.1015 | 0.9651 | 0.9651 |
| 0.0733 | 9.64 | 3200 | 0.0952 | 0.9668 | 0.9668 |
| 0.0714 | 10.24 | 3400 | 0.1080 | 0.9634 | 0.9634 |
| 0.0737 | 10.84 | 3600 | 0.0922 | 0.9668 | 0.9668 |
| 0.0651 | 11.45 | 3800 | 0.0968 | 0.9659 | 0.9659 |
| 0.0676 | 12.05 | 4000 | 0.1078 | 0.9642 | 0.9642 |
| 0.0662 | 12.65 | 4200 | 0.0944 | 0.9661 | 0.9661 |
| 0.0638 | 13.25 | 4400 | 0.1070 | 0.9655 | 0.9655 |
| 0.0614 | 13.86 | 4600 | 0.0990 | 0.9665 | 0.9665 |
| 0.0598 | 14.46 | 4800 | 0.1029 | 0.9681 | 0.9682 |
| 0.0568 | 15.06 | 5000 | 0.1000 | 0.9661 | 0.9661 |
| 0.055 | 15.66 | 5200 | 0.1026 | 0.9666 | 0.9666 |
| 0.0544 | 16.27 | 5400 | 0.1151 | 0.9653 | 0.9653 |
| 0.0525 | 16.87 | 5600 | 0.1003 | 0.9680 | 0.9680 |
| 0.05 | 17.47 | 5800 | 0.1067 | 0.9659 | 0.9659 |
| 0.0527 | 18.07 | 6000 | 0.1135 | 0.9661 | 0.9661 |
| 0.0474 | 18.67 | 6200 | 0.1090 | 0.9674 | 0.9674 |
| 0.0498 | 19.28 | 6400 | 0.1081 | 0.9665 | 0.9665 |
| 0.044 | 19.88 | 6600 | 0.1199 | 0.9676 | 0.9676 |
| 0.0452 | 20.48 | 6800 | 0.1131 | 0.9663 | 0.9663 |
| 0.0453 | 21.08 | 7000 | 0.1190 | 0.9655 | 0.9655 |
| 0.0417 | 21.69 | 7200 | 0.1176 | 0.9651 | 0.9651 |
| 0.0385 | 22.29 | 7400 | 0.1210 | 0.9665 | 0.9665 |
| 0.0401 | 22.89 | 7600 | 0.1178 | 0.9663 | 0.9663 |
| 0.0404 | 23.49 | 7800 | 0.1174 | 0.9668 | 0.9668 |
| 0.038 | 24.1 | 8000 | 0.1168 | 0.9674 | 0.9674 |
| 0.037 | 24.7 | 8200 | 0.1268 | 0.9668 | 0.9668 |
| 0.0369 | 25.3 | 8400 | 0.1228 | 0.9665 | 0.9665 |
| 0.0361 | 25.9 | 8600 | 0.1224 | 0.9674 | 0.9674 |
| 0.0349 | 26.51 | 8800 | 0.1238 | 0.9665 | 0.9665 |
| 0.0358 | 27.11 | 9000 | 0.1228 | 0.9672 | 0.9672 |
| 0.033 | 27.71 | 9200 | 0.1271 | 0.9678 | 0.9678 |
| 0.0338 | 28.31 | 9400 | 0.1239 | 0.9678 | 0.9678 |
| 0.0337 | 28.92 | 9600 | 0.1247 | 0.9670 | 0.9670 |
| 0.033 | 29.52 | 9800 | 0.1249 | 0.9674 | 0.9674 |
| 0.0315 | 30.12 | 10000 | 0.1260 | 0.9678 | 0.9678 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:23:23+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_300\_notata-seqsight\_4096\_512\_46M-L32\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1358
* F1 Score: 0.9555
* Accuracy: 0.9555
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4112
- F1 Score: 0.8150
- Accuracy: 0.8150
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5196 | 0.54 | 200 | 0.4708 | 0.7768 | 0.7769 |
| 0.4716 | 1.08 | 400 | 0.4598 | 0.7880 | 0.7880 |
| 0.4517 | 1.62 | 600 | 0.4506 | 0.7938 | 0.7939 |
| 0.4415 | 2.16 | 800 | 0.4477 | 0.7969 | 0.7971 |
| 0.4363 | 2.7 | 1000 | 0.4394 | 0.8020 | 0.8020 |
| 0.4319 | 3.24 | 1200 | 0.4358 | 0.8047 | 0.8047 |
| 0.4314 | 3.78 | 1400 | 0.4343 | 0.8023 | 0.8024 |
| 0.423 | 4.32 | 1600 | 0.4372 | 0.8047 | 0.8047 |
| 0.4253 | 4.86 | 1800 | 0.4352 | 0.8042 | 0.8042 |
| 0.4206 | 5.41 | 2000 | 0.4362 | 0.8038 | 0.8039 |
| 0.4178 | 5.95 | 2200 | 0.4362 | 0.8031 | 0.8032 |
| 0.4225 | 6.49 | 2400 | 0.4350 | 0.8012 | 0.8014 |
| 0.4139 | 7.03 | 2600 | 0.4444 | 0.7954 | 0.7959 |
| 0.4152 | 7.57 | 2800 | 0.4349 | 0.8046 | 0.8047 |
| 0.4106 | 8.11 | 3000 | 0.4325 | 0.8047 | 0.8047 |
| 0.4109 | 8.65 | 3200 | 0.4320 | 0.8060 | 0.8061 |
| 0.4135 | 9.19 | 3400 | 0.4323 | 0.8050 | 0.8051 |
| 0.4087 | 9.73 | 3600 | 0.4303 | 0.8039 | 0.8041 |
| 0.4125 | 10.27 | 3800 | 0.4304 | 0.8057 | 0.8057 |
| 0.4096 | 10.81 | 4000 | 0.4388 | 0.8005 | 0.8008 |
| 0.4056 | 11.35 | 4200 | 0.4316 | 0.8054 | 0.8054 |
| 0.4081 | 11.89 | 4400 | 0.4267 | 0.8065 | 0.8066 |
| 0.4043 | 12.43 | 4600 | 0.4385 | 0.7998 | 0.8002 |
| 0.4051 | 12.97 | 4800 | 0.4293 | 0.8064 | 0.8064 |
| 0.4047 | 13.51 | 5000 | 0.4303 | 0.8044 | 0.8046 |
| 0.4027 | 14.05 | 5200 | 0.4253 | 0.8059 | 0.8059 |
| 0.4024 | 14.59 | 5400 | 0.4291 | 0.8030 | 0.8032 |
| 0.4027 | 15.14 | 5600 | 0.4280 | 0.8030 | 0.8032 |
| 0.4003 | 15.68 | 5800 | 0.4260 | 0.8074 | 0.8074 |
| 0.3969 | 16.22 | 6000 | 0.4248 | 0.8081 | 0.8081 |
| 0.4046 | 16.76 | 6200 | 0.4225 | 0.8076 | 0.8076 |
| 0.4017 | 17.3 | 6400 | 0.4248 | 0.8069 | 0.8069 |
| 0.4005 | 17.84 | 6600 | 0.4247 | 0.8064 | 0.8064 |
| 0.3962 | 18.38 | 6800 | 0.4272 | 0.8074 | 0.8076 |
| 0.3982 | 18.92 | 7000 | 0.4214 | 0.8094 | 0.8095 |
| 0.3917 | 19.46 | 7200 | 0.4235 | 0.8091 | 0.8091 |
| 0.4023 | 20.0 | 7400 | 0.4240 | 0.8047 | 0.8047 |
| 0.3966 | 20.54 | 7600 | 0.4252 | 0.8074 | 0.8074 |
| 0.3947 | 21.08 | 7800 | 0.4236 | 0.8089 | 0.8090 |
| 0.3967 | 21.62 | 8000 | 0.4241 | 0.8064 | 0.8064 |
| 0.3944 | 22.16 | 8200 | 0.4229 | 0.8078 | 0.8078 |
| 0.4017 | 22.7 | 8400 | 0.4208 | 0.8084 | 0.8084 |
| 0.3934 | 23.24 | 8600 | 0.4210 | 0.8061 | 0.8061 |
| 0.3929 | 23.78 | 8800 | 0.4220 | 0.8078 | 0.8078 |
| 0.3926 | 24.32 | 9000 | 0.4224 | 0.8066 | 0.8066 |
| 0.3922 | 24.86 | 9200 | 0.4219 | 0.8068 | 0.8068 |
| 0.4005 | 25.41 | 9400 | 0.4213 | 0.8093 | 0.8093 |
| 0.3897 | 25.95 | 9600 | 0.4218 | 0.8083 | 0.8083 |
| 0.3949 | 26.49 | 9800 | 0.4218 | 0.8088 | 0.8088 |
| 0.3903 | 27.03 | 10000 | 0.4219 | 0.8089 | 0.8090 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:23:23+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_46M-L1\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4112
* F1 Score: 0.8150
* Accuracy: 0.8150
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_llama3_8b - GGUF
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_llama3_8b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [saiga_llama3_8b.Q2_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q2_K.gguf) | Q2_K | 2.96GB |
| [saiga_llama3_8b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [saiga_llama3_8b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [saiga_llama3_8b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [saiga_llama3_8b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [saiga_llama3_8b.Q3_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K.gguf) | Q3_K | 3.74GB |
| [saiga_llama3_8b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [saiga_llama3_8b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [saiga_llama3_8b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [saiga_llama3_8b.Q4_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_0.gguf) | Q4_0 | 4.34GB |
| [saiga_llama3_8b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [saiga_llama3_8b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [saiga_llama3_8b.Q4_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_K.gguf) | Q4_K | 4.58GB |
| [saiga_llama3_8b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [saiga_llama3_8b.Q4_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q4_1.gguf) | Q4_1 | 4.78GB |
| [saiga_llama3_8b.Q5_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_0.gguf) | Q5_0 | 5.21GB |
| [saiga_llama3_8b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [saiga_llama3_8b.Q5_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_K.gguf) | Q5_K | 5.34GB |
| [saiga_llama3_8b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [saiga_llama3_8b.Q5_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q5_1.gguf) | Q5_1 | 5.65GB |
| [saiga_llama3_8b.Q6_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf/blob/main/saiga_llama3_8b.Q6_K.gguf) | Q6_K | 6.14GB |
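As a rough sanity check on the table above, dividing file size by parameter count gives an approximate bits-per-weight figure for each quant method (a back-of-the-envelope sketch: it assumes roughly 8.03B parameters for Llama-3 8B, treats the listed sizes as decimal gigabytes, and ignores metadata overhead in the GGUF file):

```python
def bits_per_weight(file_size_gb: float, n_params: float = 8.03e9) -> float:
    """Approximate average bits per weight from a GGUF file size."""
    return file_size_gb * 1e9 * 8 / n_params
```

For example, Q4_K_M at 4.58GB works out to roughly 4.6 bits per weight, and Q2_K at 2.96GB to roughly 3 bits, consistent with the quant method names.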
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/saiga_scored
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
---
# Saiga/Llama3 8B, Russian Llama-3-based chatbot
Based on [Llama-3 8B Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
Llama.cpp version: [link](https://huggingface.co/IlyaGusev/saiga_llama3_8b_gguf)
**ОСТОРОЖНО! WARNING! LET OP!**
I've changed the prompt format from ChatML to **the original Llama-3 format in v4**. Don't forget to switch formats!
**v4**: Llama-3 prompt format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|eot_id|><|start_header_id|>user<|end_header_id|>
Как дела?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
Отлично, а у тебя?<|eot_id|><|start_header_id|>user<|end_header_id|>
Шикарно. Как пройти в библиотеку?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
**v2, v3**: ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Как дела?<|im_end|>
<|im_start|>assistant
Отлично, а у тебя?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
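The v4 prompt format above can be assembled programmatically. A minimal sketch (`build_llama3_prompt` is a hypothetical helper written from the template shown; in practice the tokenizer's `apply_chat_template` is the safer route):

```python
def build_llama3_prompt(messages, add_generation_prompt=True):
    """Assemble a Llama-3 style prompt from (role, content) pairs."""
    parts = ["<|begin_of_text|>"]
    for role, content in messages:
        parts.append(f"<|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>")
    if add_generation_prompt:
        # Open the assistant turn so generation continues from here
        parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)
```

Remember that v2/v3 checkpoints expect the ChatML format instead, so the builder must match the model version.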
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием хлорофилла, пигмента, который поглощает свет и передает энергию для фотосинтеза. Хлорофилл содержится в клетках листьев растений и играет ключевую роль в процессе фотосинтеза, когда растения используют солнечный свет для превращения воды и углекислого газа в органические вещества, такие как глюкоза и олигоэфиры.
Хлорофилл состоит из двух основных форм: хлорофилла а и хлорофилла б. Хлорофилл а поглощает свет с длиной волны около 430 нанометров (нм), что соответствует синему или голубому свету, а хлорофилл б поглощает свет с длиной волны около 660 нм, что соответствует красному свету. Когда свет поглощается хлорофиллом, он передается в молекулы воды и углекислого газа, которые используются для фотосинтеза.
Таким образом, хлорофилл является основным компонентом, ответственным за зеленый цвет листьев растений, так как он поглощает свет и передает его энергию для фотосинтеза.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды в солнечную погоду на поле для игры в футбол приехала группа друзей, чтобы провести время вместе. В числе их был и Таня, которая была страстной футбольной фанаткой и всегда радовал своих друзей своим энтузиазмом и умением забивать мячи.
В этот день, как обычно, друзья решили провести соревнование между собой, чтобы определить кто из них является лучшим футболистом. Таня была очень уверена в своих силах и готовилась к этому матчу с особой серьёзностью.
Когда все друзья собрались на поле, они увидели, что перед ними стоял огромный мяч, который должен был стать предметом состязания. Мяч был огромным и тяжелым, и его размеры были необычайно большими по сравнению с обычными мячами, которые используются в футболе.
Таня была первая, кто решил начать игру. Она подошла к мячу и начала его удерживать, стараясь выдержать его вес и силу. Но мяч оказался настолько тяжелым, что Таня не смогла удержать его и он упал на землю.
Друзья посмеялись над ее неудачей, но Таня не отчаивалась и продолжила пытаться удержать мяч. Она стала использовать все свои силы и умения, чтобы выдержать его вес и силу. Наконец, после долгих усилий, она смогла удержать мяч и начала его бросать в сторону.
Мяч летел высоко вверх, и друзья смотрели, как он пролетает над полем. Но мяч неожиданно повернул и стал лететь обратно к Тане. Она успела поймать его и продолжила играть, используя все свои навыки и умения.
```
v4:
- [1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/dcbs9ttt)
v3:
- [c588356cd60bdee54d52c2dd5a2445acca8aa5c3](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/c588356cd60bdee54d52c2dd5a2445acca8aa5c3)
- dataset: [saiga_scored](https://huggingface.co/datasets/IlyaGusev/saiga_scored), scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
- wandb [link](https://wandb.ai/ilyagusev/rulm_self_instruct/runs/ltoqdsal)
v2:
- [ae61b4f9b34fac9856d361ea78c66284a00e4f0b](https://huggingface.co/IlyaGusev/saiga_llama3_8b/commit/ae61b4f9b34fac9856d361ea78c66284a00e4f0b)
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/huggingface/runs/r6u5juyk)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
# Evaluation
* Dataset: https://github.com/IlyaGusev/rulm/blob/master/self_instruct/data/tasks.jsonl
* Framework: https://github.com/tatsu-lab/alpaca_eval
* Evaluator: alpaca_eval_cot_gpt4_turbo_fn
| model | length_controlled_winrate | win_rate | standard_error | avg_length |
|-----|-----|-----|-----|-----|
|chatgpt_4_turbo | 76.04 | 90.00 |1.46 | 1270 |
|chatgpt_3_5_turbo | 50.00 | 50.00 | 0.00 | 536 |
|saiga_llama3_8b, v4 | 43.64 | 65.90 | 2.31 | 1200 |
|saiga_llama3_8b, v3 | 36.97 | 61.08 | 2.38 | 1162 |
|saiga_llama3_8b, v2 | 33.07 | 48.19 | 2.45 | 1166 |
|saiga_mistral_7b | 23.38 | 35.99 | 2.34 | 949 |
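Given the standard errors reported above, an approximate 95% confidence interval for each win rate is win_rate ± 1.96·SE (a normal-approximation sketch added for illustration, not part of the original evaluation):

```python
def win_rate_ci(win_rate: float, se: float, z: float = 1.96):
    """Normal-approximation confidence interval for a reported win rate (in %)."""
    return (win_rate - z * se, win_rate + z * se)
```

For saiga_llama3_8b v4 (win_rate 65.90, SE 2.31) this gives roughly (61.4, 70.4), which does not overlap the v2 interval, so the v4-over-v2 improvement is well outside the reported noise.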
| {} | RichardErkhov/IlyaGusev_-_saiga_llama3_8b-gguf | null | [
"gguf",
"region:us"
] | null | 2024-04-26T19:23:31+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
saiga\_llama3\_8b - GGUF
* Model creator: URL
* Original model: URL
Name: saiga\_llama3\_8b.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.96GB
Name: saiga\_llama3\_8b.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 3.28GB
Name: saiga\_llama3\_8b.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 3.43GB
Name: saiga\_llama3\_8b.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 3.41GB
Name: saiga\_llama3\_8b.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.52GB
Name: saiga\_llama3\_8b.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.74GB
Name: saiga\_llama3\_8b.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.74GB
Name: saiga\_llama3\_8b.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 4.03GB
Name: saiga\_llama3\_8b.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 4.18GB
Name: saiga\_llama3\_8b.Q4\_0.gguf, Quant method: Q4\_0, Size: 4.34GB
Name: saiga\_llama3\_8b.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 4.38GB
Name: saiga\_llama3\_8b.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 4.37GB
Name: saiga\_llama3\_8b.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.58GB
Name: saiga\_llama3\_8b.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.58GB
Name: saiga\_llama3\_8b.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.78GB
Name: saiga\_llama3\_8b.Q5\_0.gguf, Quant method: Q5\_0, Size: 5.21GB
Name: saiga\_llama3\_8b.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 5.21GB
Name: saiga\_llama3\_8b.Q5\_K.gguf, Quant method: Q5\_K, Size: 5.34GB
Name: saiga\_llama3\_8b.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 5.34GB
Name: saiga\_llama3\_8b.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.65GB
Name: saiga\_llama3\_8b.Q6\_K.gguf, Quant method: Q6\_K, Size: 6.14GB
Original model description:
---------------------------
language:
* ru
datasets:
* IlyaGusev/saiga\_scored
license: other
license\_name: llama3
license\_link: URL
---
Saiga/Llama3 8B, Russian Llama-3-based chatbot
==============================================
Based on Llama-3 8B Instruct.
URL version: link
ОСТОРОЖНО! WARNING! LET OP!
I've changed the prompt format from ChatML to the original Llama-3 format in v4. Don't forget to switch formats!
v4: Llama-3 prompt format:
v2, v3: ChatML prompt format:
Examples:
v4:
* 1cc945d4ca2c7901cf989e7edaac52ab24f1a7dd
* dataset: saiga\_scored, scores >= 8, c66032920556c0f21bbbed05e7e04433ec954c3d
* wandb link
v3:
* c588356cd60bdee54d52c2dd5a2445acca8aa5c3
* dataset: saiga\_scored, scores >= 8, d51cf8060bdc90023da8cf1c3f113f9193d6569b
* wandb link
v2:
* ae61b4f9b34fac9856d361ea78c66284a00e4f0b
* dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
* wandb link
* 5 datasets: ru\_turbo\_saiga, ru\_sharegpt\_cleaned, oasst1\_ru\_main\_branch, gpt\_roleplay\_realm, ru\_instruct\_gpt4
* Datasets merging script: create\_short\_chat\_set.py
Evaluation
==========
* Dataset: URL
* Framework: URL
* Evaluator: alpaca\_eval\_cot\_gpt4\_turbo\_fn
| [] | [
"TAGS\n#gguf #region-us \n"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
saiga_gemma_9b - GGUF
- Model creator: https://huggingface.co/IlyaGusev/
- Original model: https://huggingface.co/IlyaGusev/saiga_gemma_9b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [saiga_gemma_9b.Q2_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q2_K.gguf) | Q2_K | 3.24GB |
| [saiga_gemma_9b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_XS.gguf) | IQ3_XS | 3.54GB |
| [saiga_gemma_9b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_S.gguf) | IQ3_S | 3.71GB |
| [saiga_gemma_9b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_S.gguf) | Q3_K_S | 3.71GB |
| [saiga_gemma_9b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ3_M.gguf) | IQ3_M | 3.82GB |
| [saiga_gemma_9b.Q3_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K.gguf) | Q3_K | 4.07GB |
| [saiga_gemma_9b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_M.gguf) | Q3_K_M | 4.07GB |
| [saiga_gemma_9b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q3_K_L.gguf) | Q3_K_L | 4.39GB |
| [saiga_gemma_9b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ4_XS.gguf) | IQ4_XS | 4.48GB |
| [saiga_gemma_9b.Q4_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_0.gguf) | Q4_0 | 4.67GB |
| [saiga_gemma_9b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.IQ4_NL.gguf) | IQ4_NL | 4.69GB |
| [saiga_gemma_9b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K_S.gguf) | Q4_K_S | 4.7GB |
| [saiga_gemma_9b.Q4_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K.gguf) | Q4_K | 4.96GB |
| [saiga_gemma_9b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_K_M.gguf) | Q4_K_M | 4.96GB |
| [saiga_gemma_9b.Q4_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q4_1.gguf) | Q4_1 | 5.12GB |
| [saiga_gemma_9b.Q5_0.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_0.gguf) | Q5_0 | 5.57GB |
| [saiga_gemma_9b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K_S.gguf) | Q5_K_S | 5.57GB |
| [saiga_gemma_9b.Q5_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K.gguf) | Q5_K | 5.72GB |
| [saiga_gemma_9b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_K_M.gguf) | Q5_K_M | 5.72GB |
| [saiga_gemma_9b.Q5_1.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q5_1.gguf) | Q5_1 | 6.02GB |
| [saiga_gemma_9b.Q6_K.gguf](https://huggingface.co/RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf/blob/main/saiga_gemma_9b.Q6_K.gguf) | Q6_K | 6.53GB |
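The table above can guide which quantized file to download for a given memory budget. A rough illustrative helper (sizes are copied from the table; the headroom for KV cache and runtime overhead is an assumption):

```python
# Pick the largest saiga_gemma_9b quant that fits a given RAM budget.
# File sizes (GB) are taken from the table above; the 1.5 GB headroom
# for KV cache and runtime overhead is an illustrative assumption.
QUANT_SIZES_GB = {
    "Q2_K": 3.24, "Q3_K_S": 3.71, "Q3_K_M": 4.07, "Q3_K_L": 4.39,
    "Q4_K_S": 4.7, "Q4_K_M": 4.96, "Q5_K_S": 5.57, "Q5_K_M": 5.72,
    "Q6_K": 6.53,
}

def pick_quant(ram_gb, headroom_gb=1.5):
    """Return the largest quant whose file fits in ram_gb minus headroom, or None."""
    budget = ram_gb - headroom_gb
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items() if size <= budget]
    return max(fitting)[1] if fitting else None

print(pick_quant(8.0))   # machine with 8 GB RAM
print(pick_quant(4.0))   # machine with 4 GB RAM
```

Larger quants generally preserve more quality, so taking the biggest file that fits is a reasonable default.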
Original model description:
---
language:
- ru
datasets:
- IlyaGusev/ru_turbo_saiga
- IlyaGusev/ru_sharegpt_cleaned
- IlyaGusev/oasst1_ru_main_branch
- IlyaGusev/gpt_roleplay_realm
- lksy/ru_instruct_gpt4
---
# Saiga/Gemma 9B, Russian Gemma-based chatbot
Based on [Gemma 7B](https://huggingface.co/google/gemma-7b).
Training Colab: [link](https://colab.research.google.com/drive/1O7F7Q3IQYh-v7EfsdRwMDMfUnOYZ3DEh).
ChatML prompt format:
```
<|im_start|>system
Ты — Сайга, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.<|im_end|>
<|im_start|>user
Привет!<|im_end|>
<|im_start|>assistant
Отлично, а ты?<|im_end|>
<|im_start|>user
Шикарно. Как пройти в библиотеку?<|im_end|>
<|im_start|>assistant
```
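The ChatML turns above can also be assembled programmatically. A minimal sketch (the helper name is illustrative; the output must match the template shown above exactly):

```python
SYSTEM_PROMPT = ("Ты — Сайга, русскоязычный автоматический ассистент. "
                 "Ты разговариваешь с людьми и помогаешь им.")

def build_chatml_prompt(messages):
    """Render {role, content} messages in ChatML and open an assistant turn."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # model continues from here
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "Привет!"},
])
print(prompt)
```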
Examples:
```
User: Почему трава зеленая?
Saiga: Зеленый цвет травы обусловлен наличием в её клетках пигмента хлорофилла, который играет важную роль в процессе фотосинтеза – преобразования солнечного света в энергию, используемую для роста и развития растений.
Хлорофилл поглощает свет в определенных диапазонах длин волн, в том числе в зеленой области спектра, и отражает свет в других диапазонах, что и обуславливает зелёный цвет.
```
```
User: Сочини длинный рассказ, обязательно упоминая следующие объекты. Дано: Таня, мяч
Saiga: Однажды, в далекой стране, жила маленькая девочка по имени Таня. Она была очень активной и любила играть в мяч. Вместе со своими друзьями они проводили дни, играя в футбол, баскетбол и волейбол.
Таня была очень талантливой и ловкой, и она часто выигрывала в играх с другими детьми. Но однажды, во время игры в баскетбол, она получила сильную травму. Ей пришлось провести несколько недель в больнице, и она была очень грустной и унылой.
Её друзья не могли понять, как помочь Тане справиться с этой ситуацией. Они пытались поддерживать её, но Таня была слишком подавлена и не хотела играть в мяч.
Однажды, Таня увидела, как её друзья играют в мяч. Она не могла не смотреть, как они ловко передают мяч друг другу и забивают голы. Таня почувствовала, как её сердце разрывается от желания играть вместе с ними.
Она решила, что ей нужно вернуться в игру, чтобы снова почувствовать радость игры. Она начала медленно восстанавливаться и постепенно возвращалась к своему любимому занятию - игре в мяч.
Таня стала играть в мяч снова, но теперь она была более осторожна и внимательна к своему здоровью. Её друзья были очень рады видеть, как Таня возвращается к своей привычной жизни.
С тех пор, Таня стала настоящей звездой на поле. Она стала лучшей игроком в своей команде и помогла им выиграть множество игр. Её друзья были горды, что Таня вернулась к своему любимому занятию и снова стала такой же активной и ловкой, как раньше.
Таня и её друзья продолжали играть в мяч и проводили чудесные дни вместе. Они были счастливы, что Таня вернулась к своему любимому занятию и снова стала частью команды.
Таня и её друзья продолжают играть в мяч и вместе они создают прекрасные воспоминания.
```
v1:
- dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
- wandb [link](https://wandb.ai/ilyagusev/gemma_test/runs/k7u3uw5i)
- 5 datasets: ru_turbo_saiga, ru_sharegpt_cleaned, oasst1_ru_main_branch, gpt_roleplay_realm, ru_instruct_gpt4
- Datasets merging script: [create_short_chat_set.py](https://github.com/IlyaGusev/rulm/blob/d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a/self_instruct/src/data_processing/create_short_chat_set.py)
| {} | RichardErkhov/IlyaGusev_-_saiga_gemma_9b-gguf | null | [
"gguf",
"region:us"
] | null | 2024-04-26T19:25:57+00:00 | [] | [] | TAGS
#gguf #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
saiga\_gemma\_9b - GGUF
* Model creator: URL
* Original model: URL
Name: saiga\_gemma\_9b.Q2\_K.gguf, Quant method: Q2\_K, Size: 3.24GB
Name: saiga\_gemma\_9b.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 3.54GB
Name: saiga\_gemma\_9b.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 3.71GB
Name: saiga\_gemma\_9b.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 3.71GB
Name: saiga\_gemma\_9b.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.82GB
Name: saiga\_gemma\_9b.Q3\_K.gguf, Quant method: Q3\_K, Size: 4.07GB
Name: saiga\_gemma\_9b.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 4.07GB
Name: saiga\_gemma\_9b.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 4.39GB
Name: saiga\_gemma\_9b.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 4.48GB
Name: saiga\_gemma\_9b.Q4\_0.gguf, Quant method: Q4\_0, Size: 4.67GB
Name: saiga\_gemma\_9b.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 4.69GB
Name: saiga\_gemma\_9b.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 4.7GB
Name: saiga\_gemma\_9b.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.96GB
Name: saiga\_gemma\_9b.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.96GB
Name: saiga\_gemma\_9b.Q4\_1.gguf, Quant method: Q4\_1, Size: 5.12GB
Name: saiga\_gemma\_9b.Q5\_0.gguf, Quant method: Q5\_0, Size: 5.57GB
Name: saiga\_gemma\_9b.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 5.57GB
Name: saiga\_gemma\_9b.Q5\_K.gguf, Quant method: Q5\_K, Size: 5.72GB
Name: saiga\_gemma\_9b.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 5.72GB
Name: saiga\_gemma\_9b.Q5\_1.gguf, Quant method: Q5\_1, Size: 6.02GB
Name: saiga\_gemma\_9b.Q6\_K.gguf, Quant method: Q6\_K, Size: 6.53GB
Original model description:
---------------------------
language:
* ru
datasets:
* IlyaGusev/ru\_turbo\_saiga
* IlyaGusev/ru\_sharegpt\_cleaned
* IlyaGusev/oasst1\_ru\_main\_branch
* IlyaGusev/gpt\_roleplay\_realm
* lksy/ru\_instruct\_gpt4
---
Saiga/Gemma 9B, Russian Gemma-based chatbot
===========================================
Based on Gemma 7B.
Training Colab: link.
ChatML prompt format:
Examples:
v1:
* dataset code revision d0d123dd221e10bb2a3383bcb1c6e4efe1b4a28a
* wandb link
* 5 datasets: ru\_turbo\_saiga, ru\_sharegpt\_cleaned, oasst1\_ru\_main\_branch, gpt\_roleplay\_realm, ru\_instruct\_gpt4
* Datasets merging script: create\_short\_chat\_set.py
| [] | [
"TAGS\n#gguf #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4053
- F1 Score: 0.8177
- Accuracy: 0.8177
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
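With `lr_scheduler_type: linear` over 10000 steps, the learning rate decays from the peak of 0.0005 down to 0. A minimal sketch of that schedule (assuming no warmup, which is not stated above):

```python
PEAK_LR = 0.0005
TOTAL_STEPS = 10_000

def linear_lr(step, peak=PEAK_LR, total=TOTAL_STEPS):
    """Learning rate under a linear decay schedule with no warmup."""
    return peak * max(0.0, 1.0 - step / total)

print(linear_lr(0))       # start of training
print(linear_lr(5_000))   # halfway
print(linear_lr(10_000))  # final step
```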
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4984 | 0.54 | 200 | 0.4572 | 0.7900 | 0.7900 |
| 0.4522 | 1.08 | 400 | 0.4546 | 0.7940 | 0.7943 |
| 0.433 | 1.62 | 600 | 0.4402 | 0.7991 | 0.7992 |
| 0.4282 | 2.16 | 800 | 0.4399 | 0.8021 | 0.8022 |
| 0.4224 | 2.7 | 1000 | 0.4331 | 0.8047 | 0.8047 |
| 0.4188 | 3.24 | 1200 | 0.4272 | 0.8059 | 0.8059 |
| 0.4157 | 3.78 | 1400 | 0.4306 | 0.8033 | 0.8034 |
| 0.4083 | 4.32 | 1600 | 0.4295 | 0.8057 | 0.8057 |
| 0.4128 | 4.86 | 1800 | 0.4294 | 0.8044 | 0.8044 |
| 0.405 | 5.41 | 2000 | 0.4283 | 0.8038 | 0.8039 |
| 0.4044 | 5.95 | 2200 | 0.4296 | 0.8066 | 0.8066 |
| 0.4066 | 6.49 | 2400 | 0.4275 | 0.8059 | 0.8059 |
| 0.3998 | 7.03 | 2600 | 0.4304 | 0.8066 | 0.8068 |
| 0.399 | 7.57 | 2800 | 0.4296 | 0.8057 | 0.8057 |
| 0.3953 | 8.11 | 3000 | 0.4262 | 0.8102 | 0.8103 |
| 0.3932 | 8.65 | 3200 | 0.4228 | 0.8074 | 0.8074 |
| 0.3965 | 9.19 | 3400 | 0.4271 | 0.8083 | 0.8083 |
| 0.3927 | 9.73 | 3600 | 0.4225 | 0.8090 | 0.8091 |
| 0.3945 | 10.27 | 3800 | 0.4243 | 0.8096 | 0.8096 |
| 0.3903 | 10.81 | 4000 | 0.4342 | 0.8025 | 0.8027 |
| 0.3864 | 11.35 | 4200 | 0.4280 | 0.8092 | 0.8093 |
| 0.3885 | 11.89 | 4400 | 0.4219 | 0.8126 | 0.8127 |
| 0.3863 | 12.43 | 4600 | 0.4343 | 0.8074 | 0.8076 |
| 0.3847 | 12.97 | 4800 | 0.4253 | 0.8105 | 0.8105 |
| 0.3824 | 13.51 | 5000 | 0.4253 | 0.8108 | 0.8110 |
| 0.3834 | 14.05 | 5200 | 0.4210 | 0.8111 | 0.8111 |
| 0.3793 | 14.59 | 5400 | 0.4262 | 0.8096 | 0.8096 |
| 0.3813 | 15.14 | 5600 | 0.4269 | 0.8087 | 0.8088 |
| 0.3781 | 15.68 | 5800 | 0.4244 | 0.8111 | 0.8111 |
| 0.3746 | 16.22 | 6000 | 0.4248 | 0.8103 | 0.8105 |
| 0.3796 | 16.76 | 6200 | 0.4194 | 0.8147 | 0.8147 |
| 0.3778 | 17.3 | 6400 | 0.4247 | 0.8135 | 0.8135 |
| 0.3763 | 17.84 | 6600 | 0.4247 | 0.8137 | 0.8137 |
| 0.3707 | 18.38 | 6800 | 0.4286 | 0.8093 | 0.8096 |
| 0.3751 | 18.92 | 7000 | 0.4188 | 0.8119 | 0.8120 |
| 0.3667 | 19.46 | 7200 | 0.4234 | 0.8130 | 0.8130 |
| 0.3777 | 20.0 | 7400 | 0.4236 | 0.8130 | 0.8130 |
| 0.3697 | 20.54 | 7600 | 0.4249 | 0.8133 | 0.8133 |
| 0.3703 | 21.08 | 7800 | 0.4250 | 0.8133 | 0.8133 |
| 0.3693 | 21.62 | 8000 | 0.4215 | 0.8128 | 0.8128 |
| 0.3679 | 22.16 | 8200 | 0.4198 | 0.8110 | 0.8110 |
| 0.3734 | 22.7 | 8400 | 0.4200 | 0.8123 | 0.8123 |
| 0.3659 | 23.24 | 8600 | 0.4188 | 0.8127 | 0.8127 |
| 0.3653 | 23.78 | 8800 | 0.4210 | 0.8131 | 0.8132 |
| 0.3674 | 24.32 | 9000 | 0.4202 | 0.8137 | 0.8137 |
| 0.3646 | 24.86 | 9200 | 0.4201 | 0.8139 | 0.8139 |
| 0.3721 | 25.41 | 9400 | 0.4193 | 0.8154 | 0.8154 |
| 0.3613 | 25.95 | 9600 | 0.4210 | 0.8150 | 0.8150 |
| 0.3673 | 26.49 | 9800 | 0.4205 | 0.8150 | 0.8150 |
| 0.361 | 27.03 | 10000 | 0.4209 | 0.8140 | 0.8140 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:28:24+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_46M-L8\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4053
* F1 Score: 0.8177
* Accuracy: 0.8177
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Starling-LM-7B-beta - GGUF
- Model creator: https://huggingface.co/Nexusflow/
- Original model: https://huggingface.co/Nexusflow/Starling-LM-7B-beta/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Starling-LM-7B-beta.Q2_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q2_K.gguf) | Q2_K | 2.53GB |
| [Starling-LM-7B-beta.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Starling-LM-7B-beta.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Starling-LM-7B-beta.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Starling-LM-7B-beta.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Starling-LM-7B-beta.Q3_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K.gguf) | Q3_K | 3.28GB |
| [Starling-LM-7B-beta.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Starling-LM-7B-beta.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Starling-LM-7B-beta.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Starling-LM-7B-beta.Q4_0.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Starling-LM-7B-beta.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Starling-LM-7B-beta.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Starling-LM-7B-beta.Q4_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_K.gguf) | Q4_K | 4.07GB |
| [Starling-LM-7B-beta.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Starling-LM-7B-beta.Q4_1.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Starling-LM-7B-beta.Q5_0.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Starling-LM-7B-beta.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Starling-LM-7B-beta.Q5_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_K.gguf) | Q5_K | 4.78GB |
| [Starling-LM-7B-beta.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Starling-LM-7B-beta.Q5_1.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Starling-LM-7B-beta.Q6_K.gguf](https://huggingface.co/RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf/blob/main/Starling-LM-7B-beta.Q6_K.gguf) | Q6_K | 5.53GB |
Original model description:
---
license: apache-2.0
datasets:
- berkeley-nest/Nectar
language:
- en
library_name: transformers
tags:
- reward model
- RLHF
- RLAIF
---
# Starling-LM-7B-beta
<!-- Provide a quick summary of what the model is/does. -->
- **Developed by:** The Nexusflow Team (Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
- **Model type:** Language Model finetuned with RLHF / RLAIF
- **License:** Apache-2.0 license under the condition that the model is not used to compete with OpenAI
- **Finetuned from model:** [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) (based on [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1))
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106) with our new reward model [Nexusflow/Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B) and policy optimization method [Fine-Tuning Language Models from Human Preferences (PPO)](https://arxiv.org/abs/1909.08593).
Harnessing the power of the ranking dataset, [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the upgraded reward model, [Starling-RM-34B](https://huggingface.co/Nexusflow/Starling-RM-34B), and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
Our model follows the exact chat template and usage as [Openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
The conversation template is the same as Openchat-3.5-0106:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat-3.5-0106")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
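The same turn structure can be built with a small helper. A hedged sketch (the function name is illustrative; the output must match the template strings above exactly):

```python
def build_openchat_prompt(turns):
    """Format alternating user/assistant turns in the Openchat-3.5-0106 style.

    `turns` alternates user, assistant, user, ... and the prompt always
    ends with an open assistant turn for the model to complete.
    """
    roles = ["GPT4 Correct User", "GPT4 Correct Assistant"]
    parts = [f"{roles[i % 2]}: {text}<|end_of_turn|>" for i, text in enumerate(turns)]
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

print(build_openchat_prompt(["Hello"]))
print(build_openchat_prompt(["Hello", "Hi", "How are you today?"]))
```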
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("Nexusflow/Starling-LM-7B-beta")
model = transformers.AutoModelForCausalLM.from_pretrained("Nexusflow/Starling-LM-7B-beta")
def generate_response(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
max_length=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
response_ids = outputs[0]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
```
## License
The dataset, model, and online demo are subject to the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
## Acknowledgment
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support of the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation, and online demo. We would like to thank the open-source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan, and ShareGPT.
## Citation
```
@misc{starling2023,
title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
url = {},
author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Ganesan, Karthik and Chiang, Wei-Lin and Zhang, Jian and Jiao, Jiantao},
month = {November},
year = {2023}
}
```
| {} | RichardErkhov/Nexusflow_-_Starling-LM-7B-beta-gguf | null | [
"gguf",
"arxiv:1909.08593",
"region:us"
] | null | 2024-04-26T19:34:23+00:00 | [
"1909.08593"
] | [] | TAGS
#gguf #arxiv-1909.08593 #region-us
| Quantization made by Richard Erkhov.
Github
Discord
Request more models
Starling-LM-7B-beta - GGUF
* Model creator: URL
* Original model: URL
Name: Starling-LM-7B-beta.Q2\_K.gguf, Quant method: Q2\_K, Size: 2.53GB
Name: Starling-LM-7B-beta.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 2.81GB
Name: Starling-LM-7B-beta.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 2.96GB
Name: Starling-LM-7B-beta.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 2.95GB
Name: Starling-LM-7B-beta.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 3.06GB
Name: Starling-LM-7B-beta.Q3\_K.gguf, Quant method: Q3\_K, Size: 3.28GB
Name: Starling-LM-7B-beta.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 3.28GB
Name: Starling-LM-7B-beta.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 3.56GB
Name: Starling-LM-7B-beta.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 3.67GB
Name: Starling-LM-7B-beta.Q4\_0.gguf, Quant method: Q4\_0, Size: 3.83GB
Name: Starling-LM-7B-beta.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 3.87GB
Name: Starling-LM-7B-beta.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 3.86GB
Name: Starling-LM-7B-beta.Q4\_K.gguf, Quant method: Q4\_K, Size: 4.07GB
Name: Starling-LM-7B-beta.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 4.07GB
Name: Starling-LM-7B-beta.Q4\_1.gguf, Quant method: Q4\_1, Size: 4.24GB
Name: Starling-LM-7B-beta.Q5\_0.gguf, Quant method: Q5\_0, Size: 4.65GB
Name: Starling-LM-7B-beta.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 4.65GB
Name: Starling-LM-7B-beta.Q5\_K.gguf, Quant method: Q5\_K, Size: 4.78GB
Name: Starling-LM-7B-beta.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 4.78GB
Name: Starling-LM-7B-beta.Q5\_1.gguf, Quant method: Q5\_1, Size: 5.07GB
Name: Starling-LM-7B-beta.Q6\_K.gguf, Quant method: Q6\_K, Size: 5.53GB
Original model description:
---------------------------
license: apache-2.0
datasets:
* berkeley-nest/Nectar
language:
* en
library\_name: transformers
tags:
* reward model
* RLHF
* RLAIF
---
Starling-LM-7B-beta
===================
* Developed by: The Nexusflow Team (Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, Karthik Ganesan, Wei-Lin Chiang, Jian Zhang, and Jiantao Jiao).
* Model type: Language Model finetuned with RLHF / RLAIF
* License: Apache-2.0 license under the condition that the model is not used to compete with OpenAI
* Finetuned from model: Openchat-3.5-0106 (based on Mistral-7B-v0.1)
We introduce Starling-LM-7B-beta, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). Starling-LM-7B-beta is trained from Openchat-3.5-0106 with our new reward model Nexusflow/Starling-RM-34B and policy optimization method Fine-Tuning Language Models from Human Preferences (PPO).
Harnessing the power of the ranking dataset, berkeley-nest/Nectar, the upgraded reward model, Starling-RM-34B, and the new reward training and policy tuning pipeline, Starling-LM-7B-beta scores an improved 8.12 in MT Bench with GPT-4 as a judge.
Uses
----
Important: Please use the exact chat template provided below for the model; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.
Our model follows the exact chat template and usage as Openchat-3.5-0106. Please refer to their model card for more details.
In addition, our model is hosted on LMSYS Chatbot Arena for free testing.
The conversation template is the same as Openchat-3.5-0106:
Code Examples
-------------
License
-------
The dataset, model, and online demo are subject to the Terms of Use of the data generated by OpenAI, and the Privacy Practices of ShareGPT. Please contact us if you find any potential violation.
Acknowledgment
--------------
We would like to thank Tianle Li from UC Berkeley for detailed feedback and evaluation of this beta release. We would like to thank the LMSYS Organization for their support of the lmsys-chat-1M dataset, evaluation, and online demo. We would like to thank the open-source community for their efforts in providing the datasets and base models we used to develop the project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan, and ShareGPT.
| [] | [
"TAGS\n#gguf #arxiv-1909.08593 #region-us \n"
] |
text-classification | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | thorirhrafn/gpt1B_domarsft_merged | null | [
"transformers",
"safetensors",
"gpt2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T19:34:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #gpt2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #gpt2 #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3978
- F1 Score: 0.8186
- Accuracy: 0.8186
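The F1 and accuracy columns in this card track each other almost exactly. If the reported F1 is micro-averaged, that is expected: for single-label classification, micro-F1 reduces to plain accuracy. A quick pure-Python check of that identity (illustrative only — this is not the model's actual evaluation code):

```python
# Illustrative check: micro-averaged F1 equals accuracy for single-label predictions.

def micro_f1(y_true, y_pred):
    # Micro-averaging pools TP/FP/FN over all classes before computing P/R.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = len(y_pred) - tp  # every wrong prediction is a false positive for one class...
    fn = len(y_true) - tp  # ...and a false negative for another, so FP == FN
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]
assert abs(micro_f1(y_true, y_pred) - accuracy(y_true, y_pred)) < 1e-12
```

If the card's F1 were macro-averaged instead, the two numbers could diverge on an imbalanced test set.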
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
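With no warmup steps listed, a `linear` scheduler simply decays the learning rate from the peak of 0.0005 down to 0 over the 10000 training steps. A small sketch of that decay (assuming zero warmup, which this card does not state explicitly):

```python
# Linear LR decay from the peak value to 0 over the full run (assumed zero warmup).
PEAK_LR = 5e-4
TOTAL_STEPS = 10_000

def lr_at(step: int) -> float:
    # The remaining fraction of training scales the peak learning rate;
    # clamp at 0 so steps past the end do not go negative.
    return PEAK_LR * max(0.0, (TOTAL_STEPS - step) / TOTAL_STEPS)

print(lr_at(0))       # 0.0005 at the start
print(lr_at(5_000))   # 0.00025 at the halfway point
print(lr_at(10_000))  # 0.0 at the end
```

This matches the behavior of a standard linear scheduler with `num_warmup_steps=0` and `num_training_steps=10000`.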
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4843 | 0.54 | 200 | 0.4551 | 0.7931 | 0.7932 |
| 0.4414 | 1.08 | 400 | 0.4512 | 0.7917 | 0.7924 |
| 0.4221 | 1.62 | 600 | 0.4357 | 0.8025 | 0.8025 |
| 0.4209 | 2.16 | 800 | 0.4342 | 0.8067 | 0.8068 |
| 0.4146 | 2.7 | 1000 | 0.4316 | 0.8009 | 0.8012 |
| 0.4099 | 3.24 | 1200 | 0.4236 | 0.8056 | 0.8056 |
| 0.407 | 3.78 | 1400 | 0.4227 | 0.8052 | 0.8052 |
| 0.3988 | 4.32 | 1600 | 0.4267 | 0.8078 | 0.8078 |
| 0.4212 | 4.86 | 1800 | 0.6660 | 0.5306 | 0.5797 |
| 0.4294 | 5.41 | 2000 | 0.4280 | 0.8080 | 0.8081 |
| 0.4014 | 5.95 | 2200 | 0.4332 | 0.8031 | 0.8032 |
| 0.3956 | 6.49 | 2400 | 0.4251 | 0.8082 | 0.8083 |
| 0.3886 | 7.03 | 2600 | 0.4256 | 0.8053 | 0.8054 |
| 0.3866 | 7.57 | 2800 | 0.4289 | 0.8099 | 0.8100 |
| 0.3826 | 8.11 | 3000 | 0.4269 | 0.8113 | 0.8113 |
| 0.3803 | 8.65 | 3200 | 0.4216 | 0.8125 | 0.8125 |
| 0.3833 | 9.19 | 3400 | 0.4302 | 0.8106 | 0.8106 |
| 0.3784 | 9.73 | 3600 | 0.4237 | 0.8113 | 0.8115 |
| 0.3788 | 10.27 | 3800 | 0.4240 | 0.8122 | 0.8122 |
| 0.3727 | 10.81 | 4000 | 0.4327 | 0.8067 | 0.8068 |
| 0.3672 | 11.35 | 4200 | 0.4273 | 0.8111 | 0.8111 |
| 0.3704 | 11.89 | 4400 | 0.4169 | 0.8199 | 0.8199 |
| 0.3635 | 12.43 | 4600 | 0.4387 | 0.8078 | 0.8079 |
| 0.3655 | 12.97 | 4800 | 0.4228 | 0.8125 | 0.8125 |
| 0.3584 | 13.51 | 5000 | 0.4291 | 0.8121 | 0.8123 |
| 0.3616 | 14.05 | 5200 | 0.4258 | 0.8169 | 0.8169 |
| 0.3541 | 14.59 | 5400 | 0.4300 | 0.8165 | 0.8166 |
| 0.3565 | 15.14 | 5600 | 0.4357 | 0.8097 | 0.8098 |
| 0.3537 | 15.68 | 5800 | 0.4256 | 0.8166 | 0.8167 |
| 0.349 | 16.22 | 6000 | 0.4285 | 0.8159 | 0.8162 |
| 0.353 | 16.76 | 6200 | 0.4206 | 0.8162 | 0.8162 |
| 0.3506 | 17.3 | 6400 | 0.4265 | 0.8168 | 0.8169 |
| 0.3503 | 17.84 | 6600 | 0.4274 | 0.8150 | 0.8150 |
| 0.34 | 18.38 | 6800 | 0.4334 | 0.8080 | 0.8084 |
| 0.3473 | 18.92 | 7000 | 0.4236 | 0.8187 | 0.8187 |
| 0.3364 | 19.46 | 7200 | 0.4313 | 0.8170 | 0.8171 |
| 0.3468 | 20.0 | 7400 | 0.4267 | 0.8149 | 0.8149 |
| 0.3386 | 20.54 | 7600 | 0.4296 | 0.8139 | 0.8140 |
| 0.3378 | 21.08 | 7800 | 0.4356 | 0.8194 | 0.8194 |
| 0.3377 | 21.62 | 8000 | 0.4284 | 0.8138 | 0.8139 |
| 0.3363 | 22.16 | 8200 | 0.4275 | 0.8155 | 0.8155 |
| 0.338 | 22.7 | 8400 | 0.4294 | 0.8171 | 0.8172 |
| 0.3307 | 23.24 | 8600 | 0.4267 | 0.8167 | 0.8167 |
| 0.3298 | 23.78 | 8800 | 0.4318 | 0.8170 | 0.8171 |
| 0.3301 | 24.32 | 9000 | 0.4314 | 0.8150 | 0.8150 |
| 0.3309 | 24.86 | 9200 | 0.4294 | 0.8162 | 0.8162 |
| 0.3371 | 25.41 | 9400 | 0.4296 | 0.8155 | 0.8155 |
| 0.3246 | 25.95 | 9600 | 0.4311 | 0.8160 | 0.8160 |
| 0.3297 | 26.49 | 9800 | 0.4322 | 0.8160 | 0.8160 |
| 0.3249 | 27.03 | 10000 | 0.4333 | 0.8160 | 0.8160 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:36:17+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_all-seqsight\_4096\_512\_46M-L32\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3978
* F1 Score: 0.8186
* Accuracy: 0.8186
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 | {"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"} | yiyic/llama-text-entprop-lora-clf-epoch-3 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-26T19:38:40+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.2.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] |
text-to-image | diffusers |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - Yulaaa/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Example sketch only — the authors have not provided an official snippet,
# so treat the dtype/device choices below as assumptions.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "Yulaaa/dogbooth", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of [v]dog").images[0]  # the instance prompt used in training
image.save("dog.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | {"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "stabilityai/stable-diffusion-2-1", "inference": true, "instance_prompt": "a photo of [v]dog"} | Yulaaa/dogbooth | null | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-04-26T19:38:49+00:00 | [] | [] | TAGS
#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
|
# DreamBooth - Yulaaa/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using DreamBooth.
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] | [
"# DreamBooth - Yulaaa/dogbooth\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] | [
"TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n",
"# DreamBooth - Yulaaa/dogbooth\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.",
"## Intended uses & limitations",
"#### How to use",
"#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]",
"## Training details\n\n[TODO: describe the data used to train the model]"
] |
null | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 | {"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"} | yiyic/llama-text-prop-lora-clf-epoch-3 | null | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"region:us"
] | null | 2024-04-26T19:39:07+00:00 | [
"1910.09700"
] | [] | TAGS
#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.2.dev0 | [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] | [
"TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-meta-llama/Meta-Llama-3-8B #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.2.dev0"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
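For intuition, the DPO objective referenced above reduces, for a single preference pair, to a logistic loss on the policy-versus-reference log-probability margin. A minimal sketch (illustrative only; `beta` and the log-probabilities are made-up inputs, not values from this model's training run):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of the chosen/rejected
    response under the trainable policy or the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response (relative to the reference) than the rejected one.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # Numerically stable -log(sigmoid(logits)).
    if logits >= 0:
        return math.log1p(math.exp(-logits))
    return -logits + math.log1p(math.exp(logits))
```

The loss shrinks as the policy's preference for the chosen response grows relative to the reference model, which is what drives alignment without an explicit reward model.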
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding; temperature=0.0 with do_sample=True raises an error
)
print(outputs[0]["generated_text"][len(prompt):])
```
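For reference, the string that `apply_chat_template` produces for Llama-3 instruct models can be approximated as follows (a simplified sketch; the tokenizer's bundled template is the authoritative source):

```python
def llama3_prompt(messages, add_generation_prompt=True):
    """Approximate the Llama-3 instruct chat format for a list of
    {"role": ..., "content": ...} message dicts."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += (f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                f"{m['content']}<|eot_id|>")
    if add_generation_prompt:
        # Open an assistant turn so the model continues as the assistant.
        out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out
```

Deviating from these header and `<|eot_id|>` markers is exactly the kind of template mismatch the warning above cautions against.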
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
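The cosine scheduler with 100 warmup steps listed above implies a learning-rate curve along these lines (a sketch of the standard warmup-then-cosine shape, not the exact trainer internals):

```python
import math

def lr_at(step, total_steps, base_lr=2e-4, warmup_steps=100):
    """Linear warmup for the first `warmup_steps`, then cosine decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))
```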
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
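With `lora_alpha: 256` and `lora_r: 128`, the adapter update is scaled by alpha/r = 2.0 before being added to the frozen weights. A toy illustration of how a LoRA delta is assembled (plain nested lists, purely for intuition, not the PEFT internals):

```python
def lora_delta(B, A, lora_alpha, lora_r):
    """Compute the LoRA update ΔW = (alpha / r) * (B @ A)."""
    scale = lora_alpha / lora_r
    r = len(A)  # rank: inner dimension of the low-rank factors
    return [[scale * sum(B[i][t] * A[t][j] for t in range(r))
             for j in range(len(A[0]))]
            for i in range(len(B))]
```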
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
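The Avg column is the unweighted mean over the nine benchmarks; for example, OpenBioLLM-8B's reported 72.502 can be reproduced from its row:

```python
scores = {
    "Clinical KG": 76.101, "Medical Genetics": 86.1, "Anatomy": 69.829,
    "Pro Medicine": 78.21, "College Biology": 84.213, "College Medicine": 68.042,
    "MedQA 4 opts": 58.993, "PubMedQA": 74.12, "MedMCQA": 56.913,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 3))  # 72.502, matching the Avg column
```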
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.
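As a toy illustration of what rule-based PHI scrubbing looks like (this is not the model's actual mechanism, and real de-identification requires far more than regexes):

```python
import re

# Hypothetical, deliberately minimal patterns for a few US-style identifiers.
PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(text):
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pat in PATTERNS.items():
        text = pat.sub(f"[{label}]", text)
    return text
```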

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverage high-quality data sources, their outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The models' performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
  author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-8B-3.0bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"3-bit",
"region:us"
] | null | 2024-04-26T19:39:48+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-8B builds upon the powerful foundations of the Meta-Llama-3-8B and Meta-Llama-3-8B models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Ranking Dataset: berkeley-nest/Nectar</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 8 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: meta-llama/Meta-Llama-3-8B</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-8B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 1</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
<div align=)
 from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
<div align=)
Contact
--------
We look forward to hearing you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (URL
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #3-bit #region-us \n"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is a conventional guess; adjust it to match the repository):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it with SB3.
checkpoint = load_from_hub("pkroeger/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "266.91 +/- 17.69", "name": "mean_reward", "verified": false}]}]}]} | pkroeger/ppo-LunarLander-v2 | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-26T19:40:15+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3766
- F1 Score: 0.8417
- Accuracy: 0.8417
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
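The linear schedule over the 10,000 training steps can be sketched in plain Python. This is a minimal illustration of the reported `lr_scheduler_type: linear` with no warmup, not the trainer's actual implementation:

```python
def linear_lr(step, base_lr=5e-4, total_steps=10_000):
    """Linear decay from base_lr to 0 over the reported 10,000 steps
    (no warmup is listed for this run)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))      # 0.0005
print(linear_lr(5_000))  # 0.00025
```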
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5055 | 0.6 | 200 | 0.4114 | 0.8087 | 0.8087 |
| 0.4339 | 1.2 | 400 | 0.4005 | 0.8166 | 0.8168 |
| 0.4152 | 1.81 | 600 | 0.3858 | 0.8241 | 0.8242 |
| 0.4029 | 2.41 | 800 | 0.3732 | 0.8306 | 0.8306 |
| 0.3912 | 3.01 | 1000 | 0.3714 | 0.8340 | 0.8340 |
| 0.3862 | 3.61 | 1200 | 0.3654 | 0.8355 | 0.8355 |
| 0.3884 | 4.22 | 1400 | 0.3670 | 0.8338 | 0.8338 |
| 0.3884 | 4.82 | 1600 | 0.3644 | 0.8387 | 0.8387 |
| 0.3772 | 5.42 | 1800 | 0.3822 | 0.8308 | 0.8312 |
| 0.3821 | 6.02 | 2000 | 0.3655 | 0.8376 | 0.8376 |
| 0.3786 | 6.63 | 2200 | 0.3621 | 0.8378 | 0.8378 |
| 0.3748 | 7.23 | 2400 | 0.3634 | 0.8398 | 0.8398 |
| 0.3774 | 7.83 | 2600 | 0.3631 | 0.8361 | 0.8361 |
| 0.3722 | 8.43 | 2800 | 0.3653 | 0.8390 | 0.8393 |
| 0.376 | 9.04 | 3000 | 0.3608 | 0.8398 | 0.8398 |
| 0.3695 | 9.64 | 3200 | 0.3618 | 0.8385 | 0.8385 |
| 0.3691 | 10.24 | 3400 | 0.3610 | 0.8391 | 0.8391 |
| 0.3683 | 10.84 | 3600 | 0.3593 | 0.8398 | 0.8398 |
| 0.3682 | 11.45 | 3800 | 0.3590 | 0.8400 | 0.8400 |
| 0.3664 | 12.05 | 4000 | 0.3584 | 0.8414 | 0.8413 |
| 0.3641 | 12.65 | 4200 | 0.3592 | 0.8409 | 0.8410 |
| 0.366 | 13.25 | 4400 | 0.3600 | 0.8414 | 0.8413 |
| 0.3684 | 13.86 | 4600 | 0.3574 | 0.8430 | 0.8430 |
| 0.3653 | 14.46 | 4800 | 0.3567 | 0.8425 | 0.8425 |
| 0.3625 | 15.06 | 5000 | 0.3586 | 0.8438 | 0.8438 |
| 0.3605 | 15.66 | 5200 | 0.3565 | 0.8427 | 0.8427 |
| 0.3619 | 16.27 | 5400 | 0.3599 | 0.8406 | 0.8406 |
| 0.3616 | 16.87 | 5600 | 0.3621 | 0.8414 | 0.8417 |
| 0.358 | 17.47 | 5800 | 0.3605 | 0.8425 | 0.8427 |
| 0.3667 | 18.07 | 6000 | 0.3584 | 0.8435 | 0.8436 |
| 0.3582 | 18.67 | 6200 | 0.3545 | 0.8427 | 0.8427 |
| 0.3593 | 19.28 | 6400 | 0.3566 | 0.8419 | 0.8419 |
| 0.3597 | 19.88 | 6600 | 0.3560 | 0.8455 | 0.8455 |
| 0.3538 | 20.48 | 6800 | 0.3586 | 0.8424 | 0.8425 |
| 0.3623 | 21.08 | 7000 | 0.3559 | 0.8456 | 0.8457 |
| 0.3579 | 21.69 | 7200 | 0.3573 | 0.8434 | 0.8434 |
| 0.3526 | 22.29 | 7400 | 0.3570 | 0.8458 | 0.8459 |
| 0.3552 | 22.89 | 7600 | 0.3565 | 0.8436 | 0.8436 |
| 0.3546 | 23.49 | 7800 | 0.3573 | 0.8441 | 0.8442 |
| 0.3606 | 24.1 | 8000 | 0.3579 | 0.8458 | 0.8459 |
| 0.3569 | 24.7 | 8200 | 0.3573 | 0.8454 | 0.8455 |
| 0.3503 | 25.3 | 8400 | 0.3594 | 0.8450 | 0.8451 |
| 0.3577 | 25.9 | 8600 | 0.3570 | 0.8443 | 0.8444 |
| 0.3572 | 26.51 | 8800 | 0.3572 | 0.8442 | 0.8442 |
| 0.3552 | 27.11 | 9000 | 0.3564 | 0.8434 | 0.8434 |
| 0.3591 | 27.71 | 9200 | 0.3562 | 0.8449 | 0.8449 |
| 0.3454 | 28.31 | 9400 | 0.3577 | 0.8458 | 0.8459 |
| 0.3543 | 28.92 | 9600 | 0.3571 | 0.8468 | 0.8468 |
| 0.358 | 29.52 | 9800 | 0.3570 | 0.8447 | 0.8447 |
| 0.3503 | 30.12 | 10000 | 0.3570 | 0.8462 | 0.8462 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:40:16+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_46M-L1\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3766
* F1 Score: 0.8417
* Accuracy: 0.8417
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-lima
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the GAIR/lima dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
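The cosine schedule with `lr_scheduler_warmup_ratio: 0.1` can be sketched in plain Python. The 60 total steps below follow from the results table (10 epochs at 6 steps per epoch); this is an illustrative sketch, not the trainer's exact implementation:

```python
import math

def cosine_lr(step, base_lr=2e-5, total_steps=60, warmup_ratio=0.1):
    """Cosine decay with linear warmup, mirroring the reported
    hyperparameters (warmup over the first 10% of 60 total steps)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 6 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```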
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9786 | 1.0 | 6 | 2.6180 |
| 1.75 | 2.0 | 12 | 1.9988 |
| 1.5489 | 3.0 | 18 | 1.9909 |
| 1.3631 | 4.0 | 24 | 1.9766 |
| 1.0134 | 5.0 | 30 | 2.1003 |
| 0.8425 | 6.0 | 36 | 2.1683 |
| 0.7051 | 7.0 | 42 | 2.3737 |
| 0.594 | 8.0 | 48 | 2.5296 |
| 0.5107 | 9.0 | 54 | 2.5203 |
| 0.4594 | 10.0 | 60 | 2.5297 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "llama2", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["GAIR/lima"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-lima", "results": []}]} | pkarypis/llama2-lima | null | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:GAIR/lima",
"base_model:meta-llama/Llama-2-7b-hf",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T19:41:10+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-GAIR/lima #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| llama2-lima
===========
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the GAIR/lima dataset.
It achieves the following results on the evaluation set:
* Loss: 2.5297
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 16
* total\_train\_batch\_size: 128
* total\_eval\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10.0
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-GAIR/lima #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) base model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with an explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, equivalent to temperature = 0
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5 and Meditron-70B, across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
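The reported averages can be reproduced directly from the per-benchmark scores; for example, for OpenBioLLM-8B:

```python
# Per-benchmark scores for OpenBioLLM-8B, copied from the table above.
scores = [76.101, 86.1, 69.829, 78.21, 84.213, 68.042, 58.993, 74.12, 56.913]
avg = sum(scores) / len(scores)
print(round(avg, 3))  # 72.502 — matches the Avg column
```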
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-8B-4.0bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-26T19:42:43+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-8B builds upon the powerful foundations of the Meta-Llama-3-8B and Meta-Llama-3-8B models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Ranking Dataset: berkeley-nest/Nectar</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 8 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: meta-llama/Meta-Llama-3-8B</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-8B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 1</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
Contact
--------
We look forward to hearing from you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 Capabilities of GPT-4 on Medical Challenge Problems
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
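The totals listed above follow directly from the per-device settings; a quick illustrative sketch of the relationship (not part of the training code):

```python
# Effective (total) batch size = per-device batch size
#                                x number of devices
#                                x gradient accumulation steps
per_device_train_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 2

total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
print(total_train_batch_size)  # 128
```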
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1", "results": []}]} | ShenaoZhang/0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T19:44:03+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs128_nodpo_only4w_userresponse_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | transformers |
# Uploaded model
- **Developed by:** ogdanneedham
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"} | ogdanneedham/mistral-sf-64-lora | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:44:41+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: ogdanneedham
- License: apache-2.0
- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL width="200"/>
| [
"# Uploaded model\n\n- Developed by: ogdanneedham\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] | [
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: ogdanneedham\n- License: apache-2.0\n- Finetuned from model : unsloth/mistral-7b-instruct-v0.2-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundations of the **Meta-Llama-3-8B** and [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
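For reference, the DPO objective cited above trains the policy directly on preference pairs rather than fitting a separate reward model. The standard form of the loss from the cited paper (a general statement, not a claim about this model's exact training configuration) is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)\right]
```

where \(y_w\) and \(y_l\) are the preferred and rejected responses for prompt \(x\) (here drawn from a ranking dataset such as Nectar), \(\pi_{\mathrm{ref}}\) is the frozen reference model, and \(\beta\) controls how far the policy may drift from the reference.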
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding -- the "temperature = 0" setting recommended above
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
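As a rough illustration of how these settings interact (a hypothetical sketch with variable names mirroring the hyperparameters above, not the authors' training script): in LoRA, the low-rank update is scaled by `lora_alpha / lora_r` before being added to the frozen base weights, so the values reported here give a scaling factor of 2.

```python
# Hypothetical sketch of the reported QLoRA settings.
lora_r = 128
lora_alpha = 256
lora_dropout = 0.05
target_modules = [
    "q_proj", "v_proj", "k_proj", "o_proj",
    "gate_proj", "down_proj", "up_proj",
]

# LoRA adds a low-rank update to each frozen weight matrix W:
#   W' = W + (lora_alpha / lora_r) * (B @ A)
# where A is (r x d_in) and B is (d_out x r).
scaling = lora_alpha / lora_r
print(scaling)  # 2.0
```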
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5 and Meditron-70B, across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
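The 72.50% average quoted above is simply the mean of the nine per-dataset scores in the OpenBioLLM-8B row of the table; a quick check:

```python
# Per-dataset scores for OpenBioLLM-8B from the table above
# (Clinical KG, Medical Genetics, Anatomy, Pro Medicine, College Biology,
#  College Medicine, MedQA 4 opts, PubMedQA, MedMCQA).
scores = [76.101, 86.1, 69.829, 78.21, 84.213, 68.042, 58.993, 74.12, 56.913]

average = sum(scores) / len(scores)
print(round(average, 3))  # 72.502
```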
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
  author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-8B-5.0bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"5-bit",
"region:us"
] | null | 2024-04-26T19:45:27+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
Paper |
Discord
!image/jpeg
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-8B builds upon the powerful foundations of the Meta-Llama-3-8B and Meta-Llama-3-8B models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<ul>
<li>Policy Optimization: Direct Preference Optimization (DPO)</li>
<li>Ranking Dataset: berkeley-nest/Nectar</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 8 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: meta-llama/Meta-Llama-3-8B</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-8B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.</p>
<p>See the snippet below for usage with Transformers:</p>
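The usage snippet referenced here did not survive extraction in this copy of the card. As a stand-in, the Llama-3 instruct prompt layout that the card insists on can be sketched in plain Python (the system and user strings are illustrative; in real use `tokenizer.apply_chat_template` should assemble this):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama-3 instruct format.

    This only makes the special-token layout explicit;
    tokenizer.apply_chat_template is the supported way in practice.
    """
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


prompt = build_llama3_prompt(
    "You are an expert from the healthcare and biomedical domain.",
    "How long does it take for newborn jaundice to go away?",
)
print(prompt.count("<|eot_id|>"))  # 2 -- the system and user turns are closed
```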
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 1</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p>De-Identification: OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
Contact
--------
We look forward to hearing you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 Capabilities of GPT-4 on Medical Challenge Problems
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n"
] |
text-generation | transformers |
# miqu-evil-dpo
# **Model Details**
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.

<!-- prompt-template start -->
## Prompt template: Mistral Inst
```
<s> [INST] {inst} [/INST]
```
<!-- prompt-template end -->
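For illustration, filling this template programmatically is a one-liner (a sketch; real use should go through the tokenizer's chat template so special tokens are handled correctly):

```python
def build_mistral_prompt(inst: str) -> str:
    # Single-turn Mistral-instruct layout, matching the template above.
    return f"<s> [INST] {inst} [/INST]"


print(build_mistral_prompt("Hello"))  # <s> [INST] Hello [/INST]
```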
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| {"language": ["en"], "license": "other", "tags": ["not-for-all-audiences"], "license_name": "miqu-license", "license_link": "LICENSE", "pipeline_tag": "text-generation"} | blockblockblock/miqu-evil-dpo-bpw4-exl2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"not-for-all-audiences",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-04-26T19:47:10+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# miqu-evil-dpo
# Model Details
## Description
miqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.
It is trained with the evil-tune method applied.
!image/png
## Prompt template: Mistral Inst
## Disclaimer
The AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use.
| [
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with the evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #not-for-all-audiences #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# miqu-evil-dpo",
"# Model Details",
"## Description\nmiqu-evil-dpo is a fine-tuned model based on miqu, serving as a direct successor to PiVoT-0.1-Evil-a.\n\nIt is trained with the evil-tune method applied.\n\n!image/png",
"## Prompt template: Mistral Inst",
"## Disclaimer\nThe AI model provided herein is intended for experimental purposes only. The creator of this model makes no representations or warranties of any kind, either express or implied, as to the model's accuracy, reliability, or suitability for any particular purpose. The creator shall not be held liable for any outcomes, decisions, or actions taken on the basis of the information generated by this model. Users of this model assume full responsibility for any consequences resulting from its use."
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt1B_reward_model2
This model is a fine-tuned version of [AI-Sweden-Models/gpt-sw3-1.3b](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
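For intuition, the linear scheduler listed above decays the learning rate from its peak toward zero over training. A toy sketch (the total step count is illustrative, loosely matching the last logged step of this run, not the exact optimizer step count):

```python
def linear_lr(step: int, total_steps: int, peak_lr: float = 3e-05) -> float:
    """Linear decay from peak_lr at step 0 to 0 at total_steps (warmup omitted)."""
    return peak_lr * max(0.0, 1.0 - step / total_steps)


total = 3600  # illustrative; roughly the last logged step in this run
print(linear_lr(0, total))      # 3e-05
print(linear_lr(total, total))  # 0.0
```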
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0 | 0.11 | 200 | 0.0124 | 0.9930 |
| 0.0002 | 0.22 | 400 | 0.0034 | 0.9965 |
| 0.0 | 0.33 | 600 | 0.0003 | 1.0 |
| 0.0 | 0.44 | 800 | 0.0003 | 1.0 |
| 0.0 | 0.55 | 1000 | 0.0003 | 1.0 |
| 0.0 | 0.65 | 1200 | 0.0004 | 1.0 |
| 0.0 | 0.76 | 1400 | 0.0000 | 1.0 |
| 0.0 | 0.87 | 1600 | 0.0000 | 1.0 |
| 0.0 | 0.98 | 1800 | 0.0000 | 1.0 |
| 0.0 | 1.09 | 2000 | 0.0000 | 1.0 |
| 0.0 | 1.2 | 2200 | 0.0000 | 1.0 |
| 0.0001 | 1.31 | 2400 | 0.0000 | 1.0 |
| 0.0 | 1.42 | 2600 | 0.0000 | 1.0 |
| 0.0 | 1.53 | 2800 | 0.0000 | 1.0 |
| 0.0 | 1.64 | 3000 | 0.0000 | 1.0 |
| 0.0 | 1.75 | 3200 | 0.0000 | 1.0 |
| 0.0 | 1.85 | 3400 | 0.0000 | 1.0 |
| 0.0 | 1.96 | 3600 | 0.0000 | 1.0 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "AI-Sweden-Models/gpt-sw3-1.3b", "model-index": [{"name": "gpt1B_reward_model2", "results": []}]} | thorirhrafn/gpt1B_reward_model2 | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:AI-Sweden-Models/gpt-sw3-1.3b",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T19:47:11+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #reward-trainer #generated_from_trainer #base_model-AI-Sweden-Models/gpt-sw3-1.3b #license-apache-2.0 #region-us
| gpt1B\_reward\_model2
=====================
This model is a fine-tuned version of AI-Sweden-Models/gpt-sw3-1.3b on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0000
* Accuracy: 1.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 1
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* PEFT 0.8.2
* Transformers 4.38.1
* Pytorch 2.2.0+cu118
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.1\n* Pytorch 2.2.0+cu118\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #reward-trainer #generated_from_trainer #base_model-AI-Sweden-Models/gpt-sw3-1.3b #license-apache-2.0 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.1\n* Pytorch 2.2.0+cu118\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundations of the **Meta-Llama-3-8B** and [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
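For intuition about the DPO component above: the objective has a compact closed form, loss = -log sigmoid(beta * [(log pi(y_w) - log pi_ref(y_w)) - (log pi(y_l) - log pi_ref(y_l))]). A scalar toy version (illustrative only; beta = 0.1 is a common default from the DPO paper, not a value reported by this card):

```python
import math


def dpo_loss(logp_w: float, logp_l: float,
             ref_logp_w: float, ref_logp_l: float,
             beta: float = 0.1) -> float:
    """DPO loss for one (chosen, rejected) pair of sequence log-probs."""
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log(sigmoid)


# Zero margin gives ln 2; a wider chosen-vs-rejected margin drives the loss toward 0.
print(round(dpo_loss(0.0, 0.0, 0.0, 0.0), 4))  # 0.6931
```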
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",  # pipeline() takes device_map="auto"; device="auto" is not a valid torch device
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
do_sample=False,  # greedy decoding; do_sample=True with temperature=0.0 raises an error
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
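For context on the r = 128, lora_alpha = 256 choice above: LoRA scales its low-rank update by alpha/r, and each adapted linear layer of shape (d_in, d_out) gains r*(d_in + d_out) trainable parameters. A quick sketch (the 4096 dimensions are illustrative Llama-like shapes, not values stated by the card):

```python
def lora_extra_params(r: int, d_in: int, d_out: int) -> int:
    # LoRA factorizes the weight update as B @ A, with A: (r, d_in), B: (d_out, r).
    return r * (d_in + d_out)


r, alpha = 128, 256
print(alpha / r)  # 2.0 -- scaling factor applied to the low-rank update
print(lora_extra_params(r, 4096, 4096))  # 1048576 extra params per such layer
```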
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
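As a quick sanity check, the 72.50 average reported in the OpenBioLLM-8B row can be reproduced from its nine per-dataset scores in the table above:

```python
scores_8b = [76.101, 86.1, 69.829, 78.21, 84.213, 68.042, 58.993, 74.12, 56.913]
avg = sum(scores_8b) / len(scores_8b)
print(round(avg, 2))  # 72.5
```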
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-8B-6.0bpw-h6-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"6-bit",
"region:us"
] | null | 2024-04-26T19:48:31+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
Discord
!image/jpeg
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-8B builds upon the powerful foundations of the Meta-Llama-3-8B and Meta-Llama-3-8B models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Ranking Dataset: berkeley-nest/Nectar</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.</p>
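For background, the DPO objective referenced in the training pipeline above (from the cited Rafailov et al. paper, arXiv:2305.18290) can be sketched as follows. This is the standard formulation for a single preference pair, not the authors' exact training code:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the policy (pi_*) and the frozen reference model (ref_*).
    """
    logits = beta * ((pi_chosen - pi_rejected) - (ref_chosen - ref_rejected))
    # -log sigmoid(logits): small when the policy prefers the chosen response
    # more strongly than the reference does
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference exactly, the loss is log 2; it shrinks as the policy's margin for the chosen response grows beyond the reference's.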
<p>️ Release Details:</p>
<ul>
<li>Model Size: 8 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: meta-llama/Meta-Llama-3-8B</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-8B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 1</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>lm-evaluation-harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p>[benchmark results figure]</p>
<p>OpenBioLLM can also anonymize protected health information from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
Contact
--------
We look forward to hearing from you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 Capabilities of GPT-4 on Medical Challenge Problems
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3732
- F1 Score: 0.8451
- Accuracy: 0.8451
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
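With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from 0.0005 to 0 over the 10000 training steps. A minimal sketch of that schedule (an illustration of the configured behavior, not the exact library code):

```python
def linear_lr(step, base_lr=0.0005, total_steps=10000, warmup_steps=0):
    """Linear schedule: optional warmup up to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return base_lr * remaining
```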
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.4849 | 0.6 | 200 | 0.3996 | 0.8184 | 0.8185 |
| 0.4175 | 1.2 | 400 | 0.3830 | 0.8277 | 0.8278 |
| 0.3993 | 1.81 | 600 | 0.3797 | 0.8292 | 0.8293 |
| 0.3909 | 2.41 | 800 | 0.3694 | 0.8344 | 0.8344 |
| 0.383 | 3.01 | 1000 | 0.3659 | 0.8403 | 0.8404 |
| 0.3765 | 3.61 | 1200 | 0.3609 | 0.8391 | 0.8391 |
| 0.3792 | 4.22 | 1400 | 0.3657 | 0.8349 | 0.8349 |
| 0.3787 | 4.82 | 1600 | 0.3606 | 0.8408 | 0.8408 |
| 0.3656 | 5.42 | 1800 | 0.3801 | 0.8337 | 0.8340 |
| 0.3728 | 6.02 | 2000 | 0.3631 | 0.8396 | 0.8396 |
| 0.3688 | 6.63 | 2200 | 0.3582 | 0.8420 | 0.8421 |
| 0.3632 | 7.23 | 2400 | 0.3628 | 0.8431 | 0.8432 |
| 0.3651 | 7.83 | 2600 | 0.3620 | 0.8423 | 0.8423 |
| 0.3578 | 8.43 | 2800 | 0.3633 | 0.8426 | 0.8428 |
| 0.3639 | 9.04 | 3000 | 0.3591 | 0.8427 | 0.8427 |
| 0.3559 | 9.64 | 3200 | 0.3590 | 0.8442 | 0.8442 |
| 0.3546 | 10.24 | 3400 | 0.3612 | 0.8438 | 0.8438 |
| 0.353 | 10.84 | 3600 | 0.3598 | 0.8436 | 0.8436 |
| 0.3518 | 11.45 | 3800 | 0.3592 | 0.8429 | 0.8428 |
| 0.3512 | 12.05 | 4000 | 0.3574 | 0.8431 | 0.8430 |
| 0.3473 | 12.65 | 4200 | 0.3593 | 0.8451 | 0.8451 |
| 0.3488 | 13.25 | 4400 | 0.3598 | 0.8424 | 0.8425 |
| 0.3509 | 13.86 | 4600 | 0.3601 | 0.8475 | 0.8476 |
| 0.3471 | 14.46 | 4800 | 0.3589 | 0.8492 | 0.8493 |
| 0.3437 | 15.06 | 5000 | 0.3577 | 0.8466 | 0.8466 |
| 0.3406 | 15.66 | 5200 | 0.3582 | 0.8488 | 0.8489 |
| 0.3433 | 16.27 | 5400 | 0.3622 | 0.8451 | 0.8451 |
| 0.3414 | 16.87 | 5600 | 0.3656 | 0.8457 | 0.8461 |
| 0.3373 | 17.47 | 5800 | 0.3634 | 0.8453 | 0.8455 |
| 0.3475 | 18.07 | 6000 | 0.3605 | 0.8451 | 0.8453 |
| 0.3369 | 18.67 | 6200 | 0.3579 | 0.8486 | 0.8487 |
| 0.3393 | 19.28 | 6400 | 0.3588 | 0.8457 | 0.8457 |
| 0.339 | 19.88 | 6600 | 0.3589 | 0.8460 | 0.8461 |
| 0.332 | 20.48 | 6800 | 0.3609 | 0.8452 | 0.8453 |
| 0.3415 | 21.08 | 7000 | 0.3592 | 0.8456 | 0.8457 |
| 0.337 | 21.69 | 7200 | 0.3605 | 0.8470 | 0.8470 |
| 0.331 | 22.29 | 7400 | 0.3590 | 0.8488 | 0.8489 |
| 0.3313 | 22.89 | 7600 | 0.3626 | 0.8461 | 0.8462 |
| 0.3318 | 23.49 | 7800 | 0.3614 | 0.8460 | 0.8461 |
| 0.3358 | 24.1 | 8000 | 0.3623 | 0.8486 | 0.8487 |
| 0.3355 | 24.7 | 8200 | 0.3616 | 0.8468 | 0.8470 |
| 0.3265 | 25.3 | 8400 | 0.3658 | 0.8444 | 0.8445 |
| 0.3346 | 25.9 | 8600 | 0.3607 | 0.8490 | 0.8491 |
| 0.3311 | 26.51 | 8800 | 0.3616 | 0.8485 | 0.8485 |
| 0.3307 | 27.11 | 9000 | 0.3607 | 0.8474 | 0.8474 |
| 0.3341 | 27.71 | 9200 | 0.3618 | 0.8484 | 0.8485 |
| 0.3214 | 28.31 | 9400 | 0.3636 | 0.8463 | 0.8464 |
| 0.3288 | 28.92 | 9600 | 0.3634 | 0.8482 | 0.8483 |
| 0.3325 | 29.52 | 9800 | 0.3626 | 0.8479 | 0.8479 |
| 0.324 | 30.12 | 10000 | 0.3628 | 0.8477 | 0.8477 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:48:43+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_46M-L8\_f
============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3732
* F1 Score: 0.8451
* Accuracy: 0.8451
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
image-segmentation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mario-semantic-1
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the Custom mario Dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0721
- Mean Iou: 0.0
- Mean Accuracy: 0.0
- Overall Accuracy: 0.0
- Accuracy Unlabeled: nan
- Accuracy Mario: 0.0
- Accuracy Ground: 0.0
- Accuracy Enemy: 0.0
- Accuracy Bricks: 0.0
- Accuracy Question: 0.0
- Iou Unlabeled: 0.0
- Iou Mario: 0.0
- Iou Ground: 0.0
- Iou Enemy: 0.0
- Iou Bricks: 0.0
- Iou Question: 0.0
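The Mean IoU and per-class IoU figures above follow the standard intersection-over-union definition. As an illustrative sketch (not the exact evaluation code behind this card), mean IoU over flattened label maps can be computed as:

```python
def class_iou(pred, target, cls):
    """IoU for one class over flattened prediction/target label maps.

    Returns None when the class is absent from both (matching the 'nan'
    entries in the table above)."""
    inter = sum(1 for p, t in zip(pred, target) if p == cls and t == cls)
    union = sum(1 for p, t in zip(pred, target) if p == cls or t == cls)
    return None if union == 0 else inter / union

def mean_iou(pred, target, num_classes):
    """Average IoU over the classes actually present in pred or target."""
    ious = [class_iou(pred, target, c) for c in range(num_classes)]
    present = [i for i in ious if i is not None]
    return sum(present) / len(present) if present else 0.0
```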
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Mario | Accuracy Ground | Accuracy Enemy | Accuracy Bricks | Accuracy Question | Iou Unlabeled | Iou Mario | Iou Ground | Iou Enemy | Iou Bricks | Iou Question |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:--------------:|:---------------:|:--------------:|:---------------:|:-----------------:|:-------------:|:---------:|:----------:|:---------:|:----------:|:------------:|
| 1.1471 | 0.2222 | 10 | 1.3150 | 0.0054 | 0.0409 | 0.0429 | nan | 0.0587 | 0.0 | 0.0305 | 0.0481 | 0.0674 | 0.0 | 0.0141 | 0.0 | 0.0110 | 0.0010 | 0.0063 |
| 1.0399 | 0.4444 | 20 | 1.1597 | 0.0042 | 0.0247 | 0.0335 | nan | 0.0687 | 0.0 | 0.0054 | 0.0098 | 0.0397 | 0.0 | 0.0136 | 0.0 | 0.0029 | 0.0005 | 0.0081 |
| 0.8368 | 0.6667 | 30 | 0.9484 | 0.0018 | 0.0052 | 0.0054 | nan | 0.0024 | 0.0 | 0.0098 | 0.0018 | 0.0121 | 0.0 | 0.0012 | 0.0 | 0.0049 | 0.0002 | 0.0046 |
| 0.9264 | 0.8889 | 40 | 0.7115 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.7753 | 1.1111 | 50 | 0.7572 | 0.0010 | 0.0023 | 0.0038 | nan | 0.0 | 0.0 | 0.0113 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0062 | 0.0 | 0.0 |
| 0.6295 | 1.3333 | 60 | 0.5617 | 0.0001 | 0.0002 | 0.0003 | nan | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0009 | 0.0 | 0.0 |
| 0.5956 | 1.5556 | 70 | 0.4135 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5756 | 1.7778 | 80 | 0.2028 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5318 | 2.0 | 90 | 0.1185 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5351 | 2.2222 | 100 | 0.3064 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5706 | 2.4444 | 110 | 0.1378 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4863 | 2.6667 | 120 | 0.1121 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3226 | 2.8889 | 130 | 0.2038 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4139 | 3.1111 | 140 | 0.1520 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3983 | 3.3333 | 150 | 0.1070 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3672 | 3.5556 | 160 | 0.1282 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3324 | 3.7778 | 170 | 0.1075 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2806 | 4.0 | 180 | 0.2677 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2854 | 4.2222 | 190 | 0.1020 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3463 | 4.4444 | 200 | 0.0551 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1957 | 4.6667 | 210 | 0.1982 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.3063 | 4.8889 | 220 | 0.0962 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1933 | 5.1111 | 230 | 0.1172 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1833 | 5.3333 | 240 | 0.0600 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.231 | 5.5556 | 250 | 0.0519 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1516 | 5.7778 | 260 | 0.0575 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.172 | 6.0 | 270 | 0.1182 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1307 | 6.2222 | 280 | 0.0989 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1454 | 6.4444 | 290 | 0.1045 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1319 | 6.6667 | 300 | 0.0793 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1154 | 6.8889 | 310 | 0.0567 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1241 | 7.1111 | 320 | 0.0562 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1379 | 7.3333 | 330 | 0.0700 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1183 | 7.5556 | 340 | 0.0616 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.108 | 7.7778 | 350 | 0.0823 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1204 | 8.0 | 360 | 0.0661 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1391 | 8.2222 | 370 | 0.0578 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1554 | 8.4444 | 380 | 0.0643 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1338 | 8.6667 | 390 | 0.0822 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1358 | 8.8889 | 400 | 0.0997 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1704 | 9.1111 | 410 | 0.0503 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1242 | 9.3333 | 420 | 0.0692 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1153 | 9.5556 | 430 | 0.1003 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0999 | 9.7778 | 440 | 0.0909 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0968 | 10.0 | 450 | 0.0721 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "mario-semantic-1", "results": []}]} | karthik540/mario-semantic-1 | null | [
"transformers",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T19:50:40+00:00 | [] | [] | TAGS
#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
| mario-semantic-1
================
This model is a fine-tuned version of nvidia/mit-b0 on the Custom mario Dataset dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0721
* Mean Iou: 0.0
* Mean Accuracy: 0.0
* Overall Accuracy: 0.0
* Accuracy Unlabeled: nan
* Accuracy Mario: 0.0
* Accuracy Ground: 0.0
* Accuracy Enemy: 0.0
* Accuracy Bricks: 0.0
* Accuracy Question: 0.0
* Iou Unlabeled: 0.0
* Iou Mario: 0.0
* Iou Ground: 0.0
* Iou Enemy: 0.0
* Iou Bricks: 0.0
* Iou Question: 0.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 6e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.3.0
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 6e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Lodo97/coder-2b-v0.1-hfrl | null | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T19:50:46+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3807
- F1 Score: 0.8364
- Accuracy: 0.8364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
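As a rough sketch of the linear schedule listed above (no warmup steps are reported, so none are assumed here), the learning rate decays from 0.0005 to zero across the 10,000 training steps:

```python
def linear_lr(step, base_lr=5e-4, total_steps=10_000):
    """Linearly decay the learning rate to zero over training (no warmup assumed)."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

# e.g. halfway through training the rate has halved
mid_lr = linear_lr(5_000)  # 2.5e-4
```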
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.47 | 0.6 | 200 | 0.4062 | 0.8137 | 0.8144 |
| 0.405 | 1.2 | 400 | 0.3753 | 0.8331 | 0.8332 |
| 0.3902 | 1.81 | 600 | 0.3661 | 0.8386 | 0.8387 |
| 0.3818 | 2.41 | 800 | 0.3680 | 0.8363 | 0.8363 |
| 0.3757 | 3.01 | 1000 | 0.3655 | 0.8385 | 0.8387 |
| 0.3689 | 3.61 | 1200 | 0.3578 | 0.8413 | 0.8413 |
| 0.3703 | 4.22 | 1400 | 0.3616 | 0.8397 | 0.8396 |
| 0.3694 | 4.82 | 1600 | 0.3579 | 0.8394 | 0.8395 |
| 0.3556 | 5.42 | 1800 | 0.3794 | 0.8334 | 0.8336 |
| 0.3611 | 6.02 | 2000 | 0.3611 | 0.8407 | 0.8408 |
| 0.3539 | 6.63 | 2200 | 0.3585 | 0.8430 | 0.8430 |
| 0.3493 | 7.23 | 2400 | 0.3640 | 0.8451 | 0.8453 |
| 0.3511 | 7.83 | 2600 | 0.3590 | 0.8455 | 0.8455 |
| 0.3419 | 8.43 | 2800 | 0.3626 | 0.8412 | 0.8415 |
| 0.3473 | 9.04 | 3000 | 0.3592 | 0.8443 | 0.8444 |
| 0.3401 | 9.64 | 3200 | 0.3644 | 0.8464 | 0.8464 |
| 0.3358 | 10.24 | 3400 | 0.3688 | 0.8462 | 0.8462 |
| 0.3351 | 10.84 | 3600 | 0.3579 | 0.8440 | 0.8440 |
| 0.3314 | 11.45 | 3800 | 0.3599 | 0.8427 | 0.8427 |
| 0.3321 | 12.05 | 4000 | 0.3602 | 0.8457 | 0.8457 |
| 0.3235 | 12.65 | 4200 | 0.3640 | 0.8466 | 0.8466 |
| 0.3256 | 13.25 | 4400 | 0.3656 | 0.8436 | 0.8438 |
| 0.3269 | 13.86 | 4600 | 0.3658 | 0.8456 | 0.8457 |
| 0.3203 | 14.46 | 4800 | 0.3698 | 0.8444 | 0.8445 |
| 0.3176 | 15.06 | 5000 | 0.3608 | 0.8449 | 0.8449 |
| 0.3121 | 15.66 | 5200 | 0.3632 | 0.8458 | 0.8459 |
| 0.3149 | 16.27 | 5400 | 0.3695 | 0.8465 | 0.8466 |
| 0.3116 | 16.87 | 5600 | 0.3806 | 0.8414 | 0.8419 |
| 0.306 | 17.47 | 5800 | 0.3744 | 0.8403 | 0.8406 |
| 0.3172 | 18.07 | 6000 | 0.3707 | 0.8453 | 0.8457 |
| 0.3045 | 18.67 | 6200 | 0.3712 | 0.8458 | 0.8459 |
| 0.3058 | 19.28 | 6400 | 0.3723 | 0.8437 | 0.8438 |
| 0.3024 | 19.88 | 6600 | 0.3660 | 0.8462 | 0.8462 |
| 0.2957 | 20.48 | 6800 | 0.3684 | 0.8455 | 0.8457 |
| 0.3049 | 21.08 | 7000 | 0.3726 | 0.8426 | 0.8428 |
| 0.3004 | 21.69 | 7200 | 0.3743 | 0.8449 | 0.8449 |
| 0.2911 | 22.29 | 7400 | 0.3723 | 0.8417 | 0.8419 |
| 0.2934 | 22.89 | 7600 | 0.3815 | 0.8423 | 0.8425 |
| 0.2913 | 23.49 | 7800 | 0.3742 | 0.8429 | 0.8430 |
| 0.2947 | 24.1 | 8000 | 0.3811 | 0.8436 | 0.8438 |
| 0.2925 | 24.7 | 8200 | 0.3807 | 0.8405 | 0.8408 |
| 0.2832 | 25.3 | 8400 | 0.3903 | 0.8431 | 0.8432 |
| 0.2904 | 25.9 | 8600 | 0.3788 | 0.8433 | 0.8434 |
| 0.2891 | 26.51 | 8800 | 0.3780 | 0.8449 | 0.8449 |
| 0.2876 | 27.11 | 9000 | 0.3794 | 0.8415 | 0.8415 |
| 0.2877 | 27.71 | 9200 | 0.3848 | 0.8414 | 0.8415 |
| 0.2785 | 28.31 | 9400 | 0.3852 | 0.8391 | 0.8393 |
| 0.2839 | 28.92 | 9600 | 0.3865 | 0.8421 | 0.8423 |
| 0.2825 | 29.52 | 9800 | 0.3854 | 0.8410 | 0.8412 |
| 0.2791 | 30.12 | 10000 | 0.3856 | 0.8420 | 0.8421 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:52:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_notata-seqsight\_4096\_512\_46M-L32\_f
=============================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3807
* F1 Score: 0.8364
* Accuracy: 0.8364
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
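For readers unfamiliar with DPO, the preference loss referenced above can be sketched for a single (chosen, rejected) pair as follows. This is a simplified illustration of the objective from the DPO paper, not the actual training code used for OpenBioLLM:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Sketch of the Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the policy being trained and the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # Negative log-sigmoid of the margin: small when the policy clearly
    # prefers the chosen response, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# A policy that favors the chosen response incurs a lower loss than one
# that is indistinguishable from the reference model.
confident = dpo_loss(-5.0, -9.0, -6.0, -6.0)
indifferent = dpo_loss(-6.0, -6.0, -6.0, -6.0)  # equals log(2)
```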
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 (greedy decoding) to make this less likely.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-8B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # `device="auto"` is not a valid pipeline argument
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding; with do_sample=True, temperature must be > 0
    temperature=0.0,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
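The `top_p=0.9` parameter above controls nucleus sampling. As an illustrative sketch only (not the actual transformers implementation), top-p filtering keeps the smallest set of highest-probability tokens whose cumulative mass reaches the threshold, then renormalizes:

```python
def nucleus_filter(probs, top_p=0.9):
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches top_p, then renormalize (illustrative sketch only)."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

# With token probabilities [0.5, 0.3, 0.15, 0.05] and top_p=0.9,
# the low-probability tail token (index 3) is excluded from sampling.
filtered = nucleus_filter([0.5, 0.3, 0.15, 0.05])
```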
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models such as GPT-3.5 and Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50% despite its significantly smaller parameter count. The model's strong performance on domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
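The 72.502 average reported for OpenBioLLM-8B can be reproduced directly from its row in the table:

```python
# Nine benchmark scores from the OpenBioLLM-8B row above
openbiollm_8b_scores = [76.101, 86.1, 69.829, 78.21, 84.213,
                        68.042, 58.993, 74.12, 56.913]
average = sum(openbiollm_8b_scores) / len(openbiollm_8b_scores)
print(round(average, 3))  # 72.502
```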
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-8B-8.0bpw-h8-exl2 | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-04-26T19:52:20+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
Paper |
Discord
!image/jpeg
Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-8B builds upon the powerful foundations of the Meta-Llama-3-8B and Meta-Llama-3-8B models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Ranking Dataset: berkeley-nest/Nectar</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 8 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: meta-llama/Meta-Llama-3-8B</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-8B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 1</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
<div align=)
 from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
<div align=)
Contact
--------
We look forward to hearing you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (URL
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#transformers #pytorch #llama #text-generation #llama-3 #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-8B #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4677
- F1 Score: 0.8303
- Accuracy: 0.8303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.564 | 5.13 | 200 | 0.5603 | 0.7059 | 0.7064 |
| 0.5228 | 10.26 | 400 | 0.5456 | 0.7313 | 0.7325 |
| 0.4936 | 15.38 | 600 | 0.5083 | 0.7529 | 0.7537 |
| 0.459 | 20.51 | 800 | 0.4685 | 0.7673 | 0.7684 |
| 0.4227 | 25.64 | 1000 | 0.4269 | 0.8025 | 0.8026 |
| 0.3929 | 30.77 | 1200 | 0.4184 | 0.8203 | 0.8206 |
| 0.3703 | 35.9 | 1400 | 0.4158 | 0.8204 | 0.8206 |
| 0.3566 | 41.03 | 1600 | 0.3927 | 0.8400 | 0.8401 |
| 0.3452 | 46.15 | 1800 | 0.3935 | 0.8385 | 0.8385 |
| 0.33 | 51.28 | 2000 | 0.3986 | 0.8368 | 0.8369 |
| 0.3209 | 56.41 | 2200 | 0.3908 | 0.8433 | 0.8434 |
| 0.3114 | 61.54 | 2400 | 0.3818 | 0.8449 | 0.8450 |
| 0.3025 | 66.67 | 2600 | 0.3809 | 0.8531 | 0.8532 |
| 0.2974 | 71.79 | 2800 | 0.3810 | 0.8515 | 0.8515 |
| 0.278 | 76.92 | 3000 | 0.3911 | 0.8548 | 0.8548 |
| 0.2771 | 82.05 | 3200 | 0.3951 | 0.8385 | 0.8385 |
| 0.2645 | 87.18 | 3400 | 0.4001 | 0.8434 | 0.8434 |
| 0.2592 | 92.31 | 3600 | 0.4055 | 0.8562 | 0.8564 |
| 0.2448 | 97.44 | 3800 | 0.4128 | 0.8513 | 0.8515 |
| 0.2415 | 102.56 | 4000 | 0.4101 | 0.8531 | 0.8532 |
| 0.2343 | 107.69 | 4200 | 0.4071 | 0.8449 | 0.8450 |
| 0.2232 | 112.82 | 4400 | 0.4219 | 0.8463 | 0.8467 |
| 0.2209 | 117.95 | 4600 | 0.4118 | 0.8514 | 0.8515 |
| 0.2116 | 123.08 | 4800 | 0.4258 | 0.8532 | 0.8532 |
| 0.2072 | 128.21 | 5000 | 0.4340 | 0.8578 | 0.8581 |
| 0.2006 | 133.33 | 5200 | 0.4217 | 0.8547 | 0.8548 |
| 0.1946 | 138.46 | 5400 | 0.4435 | 0.8430 | 0.8434 |
| 0.185 | 143.59 | 5600 | 0.4495 | 0.8482 | 0.8483 |
| 0.183 | 148.72 | 5800 | 0.4562 | 0.8399 | 0.8401 |
| 0.1738 | 153.85 | 6000 | 0.4683 | 0.8495 | 0.8499 |
| 0.1735 | 158.97 | 6200 | 0.4558 | 0.8546 | 0.8548 |
| 0.17 | 164.1 | 6400 | 0.4687 | 0.8564 | 0.8564 |
| 0.1651 | 169.23 | 6600 | 0.4706 | 0.8531 | 0.8532 |
| 0.1628 | 174.36 | 6800 | 0.4622 | 0.8515 | 0.8515 |
| 0.1592 | 179.49 | 7000 | 0.4657 | 0.8579 | 0.8581 |
| 0.1568 | 184.62 | 7200 | 0.4697 | 0.8564 | 0.8564 |
| 0.1531 | 189.74 | 7400 | 0.4754 | 0.8515 | 0.8515 |
| 0.1519 | 194.87 | 7600 | 0.4839 | 0.8481 | 0.8483 |
| 0.1456 | 200.0 | 7800 | 0.4810 | 0.8513 | 0.8515 |
| 0.1439 | 205.13 | 8000 | 0.4818 | 0.8433 | 0.8434 |
| 0.1409 | 210.26 | 8200 | 0.4847 | 0.8433 | 0.8434 |
| 0.1398 | 215.38 | 8400 | 0.4923 | 0.8481 | 0.8483 |
| 0.1384 | 220.51 | 8600 | 0.4877 | 0.8482 | 0.8483 |
| 0.1407 | 225.64 | 8800 | 0.4909 | 0.8400 | 0.8401 |
| 0.1375 | 230.77 | 9000 | 0.4941 | 0.8481 | 0.8483 |
| 0.1377 | 235.9 | 9200 | 0.4932 | 0.8450 | 0.8450 |
| 0.1371 | 241.03 | 9400 | 0.4942 | 0.8449 | 0.8450 |
| 0.1392 | 246.15 | 9600 | 0.4937 | 0.8417 | 0.8418 |
| 0.1329 | 251.28 | 9800 | 0.4935 | 0.8465 | 0.8467 |
| 0.1306 | 256.41 | 10000 | 0.4939 | 0.8481 | 0.8483 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:57:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_46M-L1\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4677
* F1 Score: 0.8303
* Accuracy: 0.8303
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-to-speech | null |

Wim Hof voice high quality RVC trained 300 epochs on interviews. | {"language": ["en"], "license": "afl-3.0", "tags": ["rvc", "speech", "tts", "voice model"], "pipeline_tag": "text-to-speech"} | Schnorchelgubby/WimHof | null | [
"rvc",
"speech",
"tts",
"voice model",
"text-to-speech",
"en",
"license:afl-3.0",
"region:us"
] | null | 2024-04-26T19:58:15+00:00 | [] | [
"en"
] | TAGS
#rvc #speech #tts #voice model #text-to-speech #en #license-afl-3.0 #region-us
|
!image/png
Wim Hof voice high quality RVC trained 300 epochs on interviews. | [] | [
"TAGS\n#rvc #speech #tts #voice model #text-to-speech #en #license-afl-3.0 #region-us \n"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6392
- F1 Score: 0.8303
- Accuracy: 0.8303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5536 | 5.13 | 200 | 0.5507 | 0.7178 | 0.7178 |
| 0.4771 | 10.26 | 400 | 0.4528 | 0.7846 | 0.7847 |
| 0.3954 | 15.38 | 600 | 0.4052 | 0.8091 | 0.8091 |
| 0.3501 | 20.51 | 800 | 0.4084 | 0.8120 | 0.8124 |
| 0.3223 | 25.64 | 1000 | 0.4058 | 0.8278 | 0.8287 |
| 0.2912 | 30.77 | 1200 | 0.4098 | 0.8314 | 0.8320 |
| 0.2756 | 35.9 | 1400 | 0.3914 | 0.8384 | 0.8385 |
| 0.2552 | 41.03 | 1600 | 0.3971 | 0.8350 | 0.8352 |
| 0.2373 | 46.15 | 1800 | 0.4074 | 0.8365 | 0.8369 |
| 0.2217 | 51.28 | 2000 | 0.4023 | 0.8352 | 0.8352 |
| 0.2042 | 56.41 | 2200 | 0.4607 | 0.8334 | 0.8336 |
| 0.1924 | 61.54 | 2400 | 0.4388 | 0.8286 | 0.8287 |
| 0.1848 | 66.67 | 2600 | 0.4548 | 0.8349 | 0.8352 |
| 0.1709 | 71.79 | 2800 | 0.4728 | 0.8366 | 0.8369 |
| 0.1558 | 76.92 | 3000 | 0.4994 | 0.8352 | 0.8352 |
| 0.1493 | 82.05 | 3200 | 0.5037 | 0.8352 | 0.8352 |
| 0.1371 | 87.18 | 3400 | 0.5434 | 0.8401 | 0.8401 |
| 0.1331 | 92.31 | 3600 | 0.5410 | 0.8221 | 0.8222 |
| 0.1206 | 97.44 | 3800 | 0.5585 | 0.8432 | 0.8434 |
| 0.1183 | 102.56 | 4000 | 0.5698 | 0.8416 | 0.8418 |
| 0.1081 | 107.69 | 4200 | 0.5582 | 0.8417 | 0.8418 |
| 0.105 | 112.82 | 4400 | 0.6159 | 0.8401 | 0.8401 |
| 0.0991 | 117.95 | 4600 | 0.6073 | 0.8368 | 0.8369 |
| 0.094 | 123.08 | 4800 | 0.6109 | 0.8254 | 0.8254 |
| 0.0881 | 128.21 | 5000 | 0.6315 | 0.8352 | 0.8352 |
| 0.0883 | 133.33 | 5200 | 0.6070 | 0.8401 | 0.8401 |
| 0.0805 | 138.46 | 5400 | 0.6284 | 0.8433 | 0.8434 |
| 0.076 | 143.59 | 5600 | 0.6523 | 0.8319 | 0.8320 |
| 0.0798 | 148.72 | 5800 | 0.6554 | 0.8401 | 0.8401 |
| 0.0728 | 153.85 | 6000 | 0.6709 | 0.8466 | 0.8467 |
| 0.0701 | 158.97 | 6200 | 0.6738 | 0.8449 | 0.8450 |
| 0.0679 | 164.1 | 6400 | 0.6782 | 0.8417 | 0.8418 |
| 0.0687 | 169.23 | 6600 | 0.6762 | 0.8434 | 0.8434 |
| 0.0611 | 174.36 | 6800 | 0.6971 | 0.8368 | 0.8369 |
| 0.0628 | 179.49 | 7000 | 0.7038 | 0.8352 | 0.8352 |
| 0.0577 | 184.62 | 7200 | 0.6977 | 0.8368 | 0.8369 |
| 0.0569 | 189.74 | 7400 | 0.6989 | 0.8450 | 0.8450 |
| 0.0579 | 194.87 | 7600 | 0.6972 | 0.8450 | 0.8450 |
| 0.0572 | 200.0 | 7800 | 0.7021 | 0.8416 | 0.8418 |
| 0.0567 | 205.13 | 8000 | 0.7044 | 0.8320 | 0.8320 |
| 0.0549 | 210.26 | 8200 | 0.7075 | 0.8433 | 0.8434 |
| 0.0493 | 215.38 | 8400 | 0.7109 | 0.8369 | 0.8369 |
| 0.0514 | 220.51 | 8600 | 0.7240 | 0.8336 | 0.8336 |
| 0.0511 | 225.64 | 8800 | 0.7316 | 0.8401 | 0.8401 |
| 0.05 | 230.77 | 9000 | 0.7390 | 0.8418 | 0.8418 |
| 0.0501 | 235.9 | 9200 | 0.7306 | 0.8385 | 0.8385 |
| 0.0506 | 241.03 | 9400 | 0.7358 | 0.8401 | 0.8401 |
| 0.0482 | 246.15 | 9600 | 0.7364 | 0.8418 | 0.8418 |
| 0.0464 | 251.28 | 9800 | 0.7357 | 0.8401 | 0.8401 |
| 0.0482 | 256.41 | 10000 | 0.7352 | 0.8434 | 0.8434 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T19:58:19+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_46M-L8\_f
==========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6392
* F1 Score: 0.8303
* Accuracy: 0.8303
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1
This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1", "results": []}]} | ShenaoZhang/0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:HuggingFaceH4/mistral-7b-sft-beta",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T19:58:48+00:00 | [] | [] | TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1
This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.19.1
| [
"# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #dpo #generated_from_trainer #conversational #dataset-updated #dataset-original #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# 0.001_4iters_bs256_nodpo_only4w_userresponse_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the updated and the original datasets.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/rimy/huggingface/runs/a0v6bf61)
# results
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "model-index": [{"name": "results", "results": []}]} | Rimyy/results | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T19:59:51+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us
|
<img src="URL alt="Visualize in Weights & Biases" width="200" height="32"/>
# results
This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.41.0.dev0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.19.1 | [
"# results\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.41.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us \n",
"# results\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.41.0.dev0\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8199
- F1 Score: 0.8597
- Accuracy: 0.8597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5372 | 5.13 | 200 | 0.5163 | 0.7428 | 0.7439 |
| 0.4183 | 10.26 | 400 | 0.4135 | 0.8167 | 0.8173 |
| 0.3363 | 15.38 | 600 | 0.3974 | 0.8319 | 0.8320 |
| 0.2859 | 20.51 | 800 | 0.3957 | 0.8416 | 0.8418 |
| 0.2401 | 25.64 | 1000 | 0.4302 | 0.8189 | 0.8189 |
| 0.2023 | 30.77 | 1200 | 0.4588 | 0.8544 | 0.8548 |
| 0.1651 | 35.9 | 1400 | 0.4583 | 0.8450 | 0.8450 |
| 0.1387 | 41.03 | 1600 | 0.5110 | 0.8448 | 0.8450 |
| 0.1213 | 46.15 | 1800 | 0.5007 | 0.8434 | 0.8434 |
| 0.0993 | 51.28 | 2000 | 0.5576 | 0.8450 | 0.8450 |
| 0.0873 | 56.41 | 2200 | 0.5942 | 0.8531 | 0.8532 |
| 0.0796 | 61.54 | 2400 | 0.6050 | 0.8401 | 0.8401 |
| 0.0712 | 66.67 | 2600 | 0.5897 | 0.8463 | 0.8467 |
| 0.0625 | 71.79 | 2800 | 0.6404 | 0.8466 | 0.8467 |
| 0.0565 | 76.92 | 3000 | 0.6529 | 0.8434 | 0.8434 |
| 0.0515 | 82.05 | 3200 | 0.7046 | 0.8499 | 0.8499 |
| 0.0491 | 87.18 | 3400 | 0.6990 | 0.8515 | 0.8515 |
| 0.0458 | 92.31 | 3600 | 0.7129 | 0.8499 | 0.8499 |
| 0.0414 | 97.44 | 3800 | 0.7288 | 0.8482 | 0.8483 |
| 0.0401 | 102.56 | 4000 | 0.6625 | 0.8401 | 0.8401 |
| 0.0372 | 107.69 | 4200 | 0.6802 | 0.8466 | 0.8467 |
| 0.0331 | 112.82 | 4400 | 0.7559 | 0.8548 | 0.8548 |
| 0.0327 | 117.95 | 4600 | 0.7604 | 0.8385 | 0.8385 |
| 0.0325 | 123.08 | 4800 | 0.7189 | 0.8483 | 0.8483 |
| 0.0289 | 128.21 | 5000 | 0.7833 | 0.8483 | 0.8483 |
| 0.0292 | 133.33 | 5200 | 0.7775 | 0.8515 | 0.8515 |
| 0.0294 | 138.46 | 5400 | 0.7406 | 0.8597 | 0.8597 |
| 0.0246 | 143.59 | 5600 | 0.7963 | 0.8548 | 0.8548 |
| 0.0256 | 148.72 | 5800 | 0.7805 | 0.8482 | 0.8483 |
| 0.0249 | 153.85 | 6000 | 0.8132 | 0.8613 | 0.8613 |
| 0.0248 | 158.97 | 6200 | 0.7969 | 0.8531 | 0.8532 |
| 0.0223 | 164.1 | 6400 | 0.7997 | 0.8532 | 0.8532 |
| 0.0219 | 169.23 | 6600 | 0.8132 | 0.8434 | 0.8434 |
| 0.0181 | 174.36 | 6800 | 0.8788 | 0.8483 | 0.8483 |
| 0.0205 | 179.49 | 7000 | 0.8876 | 0.8450 | 0.8450 |
| 0.0184 | 184.62 | 7200 | 0.8420 | 0.8450 | 0.8450 |
| 0.0181 | 189.74 | 7400 | 0.8651 | 0.8548 | 0.8548 |
| 0.018 | 194.87 | 7600 | 0.8124 | 0.8564 | 0.8564 |
| 0.018 | 200.0 | 7800 | 0.8607 | 0.8499 | 0.8499 |
| 0.0183 | 205.13 | 8000 | 0.8451 | 0.8548 | 0.8548 |
| 0.0169 | 210.26 | 8200 | 0.8448 | 0.8434 | 0.8434 |
| 0.0157 | 215.38 | 8400 | 0.8915 | 0.8434 | 0.8434 |
| 0.0152 | 220.51 | 8600 | 0.9034 | 0.8450 | 0.8450 |
| 0.0151 | 225.64 | 8800 | 0.9155 | 0.8483 | 0.8483 |
| 0.0159 | 230.77 | 9000 | 0.8968 | 0.8515 | 0.8515 |
| 0.0136 | 235.9 | 9200 | 0.9146 | 0.8434 | 0.8434 |
| 0.0148 | 241.03 | 9400 | 0.8961 | 0.8467 | 0.8467 |
| 0.0134 | 246.15 | 9600 | 0.9116 | 0.8483 | 0.8483 |
| 0.0149 | 251.28 | 9800 | 0.9067 | 0.8499 | 0.8499 |
| 0.0133 | 256.41 | 10000 | 0.9077 | 0.8532 | 0.8532 |
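One way to read the table above is as a checkpoint-selection problem: the best validation accuracy (0.8613 at step 6000) does not occur at the final step. A minimal sketch, using a few (step, accuracy) pairs sampled from the table:

```python
# Hypothetical sketch: pick the best checkpoint from (step, accuracy) pairs
# sampled from the results table above. The pairs are copied from the table,
# not computed here.
checkpoints = [(5400, 0.8597), (6000, 0.8613), (10000, 0.8532)]
best_step, best_acc = max(checkpoints, key=lambda kv: kv[1])
print(best_step)  # 6000
```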
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:00:34+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_core\_tata-seqsight\_4096\_512\_46M-L32\_f
===========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8199
* F1 Score: 0.8597
* Accuracy: 0.8597
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1944
- Bleu: 0.1991
- Gen Len: 18.18
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
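The step counts in the results table below are internally consistent: with 1617 optimizer steps per epoch, the second epoch ends at step 3234. A trivial consistency check (numbers taken from the table, steps-per-epoch assumed constant):

```python
# Sanity check on the results table: epoch boundaries fall at multiples of
# the per-epoch step count reported there.
steps_per_epoch = 1617  # step logged at the end of epoch 1 in the table
total_epochs = 2
final_step = steps_per_epoch * total_epochs
print(final_step)  # 3234
```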
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 3.6446 | 1.0 | 1617 | 3.2778 | 0.1513 | 18.2069 |
| 3.5134 | 2.0 | 3234 | 3.1944 | 0.1991 | 18.18 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "t5-small", "model-index": [{"name": "my_awesome_opus_books_model", "results": []}]} | BohanJiang/my_awesome_opus_books_model | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:00:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_opus\_books\_model
===============================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 3.1944
* Bleu: 0.1991
* Gen Len: 18.18
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2014
- F1 Score: 0.9186
- Accuracy: 0.9186
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
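As a sketch of the linear scheduler named above: with no warmup assumed, the learning rate decays linearly from 5e-4 at step 0 to 0 at step 10000. This toy function illustrates the shape only; the actual scheduler used during training may differ:

```python
# Hypothetical linear-decay schedule for the hyperparameters above
# (base_lr = 5e-4, 10_000 total steps, warmup assumed to be zero).
def linear_lr(step, base_lr=5e-4, total_steps=10_000):
    # Learning rate falls linearly to zero over the training run.
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(5_000))  # 0.00025 — halfway through training, half the base LR
```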
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3473 | 0.54 | 200 | 0.2737 | 0.8912 | 0.8912 |
| 0.2744 | 1.08 | 400 | 0.2439 | 0.9019 | 0.9019 |
| 0.2531 | 1.62 | 600 | 0.2441 | 0.9004 | 0.9005 |
| 0.2354 | 2.16 | 800 | 0.2235 | 0.9095 | 0.9095 |
| 0.2294 | 2.7 | 1000 | 0.2301 | 0.9069 | 0.9069 |
| 0.2212 | 3.24 | 1200 | 0.2134 | 0.9135 | 0.9135 |
| 0.2206 | 3.78 | 1400 | 0.2116 | 0.9144 | 0.9144 |
| 0.2135 | 4.32 | 1600 | 0.2096 | 0.9128 | 0.9128 |
| 0.2055 | 4.86 | 1800 | 0.2105 | 0.9181 | 0.9181 |
| 0.2048 | 5.41 | 2000 | 0.2123 | 0.9160 | 0.9160 |
| 0.2031 | 5.95 | 2200 | 0.2002 | 0.9204 | 0.9204 |
| 0.1966 | 6.49 | 2400 | 0.2069 | 0.9162 | 0.9162 |
| 0.2001 | 7.03 | 2600 | 0.2018 | 0.9211 | 0.9211 |
| 0.192 | 7.57 | 2800 | 0.2009 | 0.9163 | 0.9164 |
| 0.1942 | 8.11 | 3000 | 0.2094 | 0.9165 | 0.9166 |
| 0.1921 | 8.65 | 3200 | 0.1999 | 0.9189 | 0.9189 |
| 0.186 | 9.19 | 3400 | 0.2058 | 0.9172 | 0.9172 |
| 0.1859 | 9.73 | 3600 | 0.1935 | 0.9216 | 0.9216 |
| 0.1944 | 10.27 | 3800 | 0.1906 | 0.9236 | 0.9236 |
| 0.1856 | 10.81 | 4000 | 0.1931 | 0.9216 | 0.9216 |
| 0.1823 | 11.35 | 4200 | 0.1926 | 0.9241 | 0.9242 |
| 0.1809 | 11.89 | 4400 | 0.1965 | 0.9243 | 0.9243 |
| 0.181 | 12.43 | 4600 | 0.1884 | 0.9246 | 0.9247 |
| 0.1852 | 12.97 | 4800 | 0.1896 | 0.9231 | 0.9231 |
| 0.1781 | 13.51 | 5000 | 0.1869 | 0.9240 | 0.9240 |
| 0.1809 | 14.05 | 5200 | 0.1890 | 0.9250 | 0.925 |
| 0.1774 | 14.59 | 5400 | 0.1824 | 0.9274 | 0.9274 |
| 0.175 | 15.14 | 5600 | 0.1836 | 0.9277 | 0.9277 |
| 0.1751 | 15.68 | 5800 | 0.1880 | 0.9234 | 0.9235 |
| 0.1772 | 16.22 | 6000 | 0.1806 | 0.9289 | 0.9289 |
| 0.1753 | 16.76 | 6200 | 0.1841 | 0.9272 | 0.9272 |
| 0.1751 | 17.3 | 6400 | 0.1916 | 0.9227 | 0.9228 |
| 0.1756 | 17.84 | 6600 | 0.1861 | 0.9248 | 0.9248 |
| 0.1682 | 18.38 | 6800 | 0.1863 | 0.9256 | 0.9257 |
| 0.1768 | 18.92 | 7000 | 0.1835 | 0.9253 | 0.9253 |
| 0.1723 | 19.46 | 7200 | 0.1886 | 0.9260 | 0.9260 |
| 0.1714 | 20.0 | 7400 | 0.1823 | 0.9279 | 0.9279 |
| 0.1713 | 20.54 | 7600 | 0.1845 | 0.9268 | 0.9269 |
| 0.169 | 21.08 | 7800 | 0.1821 | 0.9275 | 0.9275 |
| 0.1711 | 21.62 | 8000 | 0.1827 | 0.9267 | 0.9267 |
| 0.169 | 22.16 | 8200 | 0.1834 | 0.9282 | 0.9282 |
| 0.1667 | 22.7 | 8400 | 0.1854 | 0.9267 | 0.9267 |
| 0.1698 | 23.24 | 8600 | 0.1871 | 0.9267 | 0.9267 |
| 0.1669 | 23.78 | 8800 | 0.1821 | 0.9272 | 0.9272 |
| 0.1697 | 24.32 | 9000 | 0.1872 | 0.9260 | 0.9260 |
| 0.1653 | 24.86 | 9200 | 0.1853 | 0.9268 | 0.9269 |
| 0.1681 | 25.41 | 9400 | 0.1871 | 0.9263 | 0.9264 |
| 0.1668 | 25.95 | 9600 | 0.1848 | 0.9280 | 0.9280 |
| 0.1661 | 26.49 | 9800 | 0.1822 | 0.9279 | 0.9279 |
| 0.1645 | 27.03 | 10000 | 0.1839 | 0.9275 | 0.9275 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:04:02+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_46M-L1\_f
========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2014
* F1 Score: 0.9186
* Accuracy: 0.9186
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MentalRoberta-GPT2_EmpAI_EOSbajo
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 8000
- num_epochs: 3
- mixed_precision_training: Native AMP
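Note how the `total_train_batch_size: 8` line above follows from the other two batch-related settings: the effective batch size is the per-device batch size multiplied by the gradient accumulation steps. A one-line check:

```python
# Effective batch size = per-device batch size x gradient accumulation steps
# (values taken from the hyperparameter list above).
train_batch_size = 4
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 8
```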
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.3476 | 0.9999 | 5998 | 0.3400 |
| 0.2941 | 2.0 | 11997 | 0.3041 |
| 0.2917 | 2.9997 | 17994 | 0.2970 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2", "model-index": [{"name": "MentalRoberta-GPT2_EmpAI_EOSbajo", "results": []}]} | LuangMV97/MentalRoberta-GPT2_EmpAI_EOSbajo | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:05:09+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| MentalRoberta-GPT2\_EmpAI\_EOSbajo
==================================
This model is a fine-tuned version of openai-community/gpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2970
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 8
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 8000
* num\_epochs: 3
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 8000\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 8000\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/w6doc6r | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:06:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/umk17z9 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:06:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/kgk4u66 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:06:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/tn3hfw0 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:06:24+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/9m47dti | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:06:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | pruning/dj4d7mu | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:06:25+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
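For reference, the per-device batch size and gradient accumulation above combine into the listed total train batch size of 6. A minimal sketch of that relationship — the dict keys mirror 🤗 `TrainingArguments` field names for readability, but this is an illustration, not the exact training script:

```python
# Hyperparameters from this card, collected so the effective (total)
# train batch size can be derived explicitly.
hparams = {
    "learning_rate": 2e-4,
    "train_batch_size": 3,            # per-device micro-batch
    "gradient_accumulation_steps": 2,
    "eval_batch_size": 8,
    "seed": 42,
    "lr_scheduler_type": "constant",
    "warmup_ratio": 0.03,
    "num_train_epochs": 3,
}

# Total batch size per optimizer step = micro-batch x accumulation steps
# (single device; multiply by world size when training on several GPUs).
total_train_batch_size = (
    hparams["train_batch_size"] * hparams["gradient_accumulation_steps"]
)
print(total_train_batch_size)  # 6, matching the card
```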
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | {"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "codellama/CodeLlama-7b-hf", "model-index": [{"name": "code-llama-7b-text-to-sql", "results": []}]} | nan-hu-earnin/code-llama-7b-text-to-sql | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-04-26T20:07:06+00:00 | [] | [] | TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us
|
# code-llama-7b-text-to-sql
This model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 | [
"# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] | [
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-codellama/CodeLlama-7b-hf #license-llama2 #region-us \n",
"# code-llama-7b-text-to-sql\n\nThis model is a fine-tuned version of codellama/CodeLlama-7b-hf on the generator dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 6\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2"
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_new_lm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5791
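Since this is a causal language model trained with cross-entropy, the evaluation loss above maps to perplexity as exp(loss). A quick sanity check — derived purely from the reported number, not an additional measured metric:

```python
import math

eval_loss = 3.5791  # validation loss reported above
perplexity = math.exp(eval_loss)
print(f"{perplexity:.1f}")  # roughly 35.8
```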
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6897 | 1.0 | 1288 | 3.5826 |
| 3.5716 | 2.0 | 2576 | 3.5785 |
| 3.5233 | 3.0 | 3864 | 3.5791 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_new_lm", "results": []}]} | ljgries/my_awesome_new_lm | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:07:55+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_new\_lm
====================
This model is a fine-tuned version of gpt2 on the eli5\_category dataset.
It achieves the following results on the evaluation set:
* Loss: 3.5791
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
text-generation | transformers | <div align="center">
# TinyLlama-1.1B
</div>
We used this version of TinyLlama as a base model:
https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0
The goal was to improve performance on basic algebra (i.e. solving systems of linear equations).
The base model was fine-tuned on 8k rows of synthetic solution data generated by [OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf) on [ALG-514](https://paperswithcode.com/sota/math-word-problem-solving-on-alg514).
We used the [NeMo Skills](https://github.com/Kipok/NeMo-Skills) pipeline for inference with code execution and for generating the synthetic data. HuggingFace's SFTTrainer was used for fine-tuning, as the NeMo Skills pipeline is a buggy mess. Fine-tuning took 30 minutes on an RTX 3090.
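ALG-514 word problems reduce to small systems of linear equations, which is why code execution helps: the model can emit a few lines of solver code instead of doing the arithmetic in-text. A toy sketch of the kind of program such a pipeline would execute — the problem is hypothetical and the solver is plain Cramer's rule, so no extra dependencies are needed:

```python
# Hypothetical ALG-514-style problem: "The sum of two numbers is 25 and
# their difference is 7. Find the numbers."  ->  x + y = 25, x - y = 7.
def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by Cramer's rule."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("system is singular")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

x, y = solve_2x2(1, 1, 1, -1, 25, 7)
print(x, y)  # 16.0 9.0
```

In the actual pipeline, generated solutions contain code blocks like this, which NeMo Skills runs in a sandbox before substituting the result back into the final answer.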
Notes from previous model cards:
> We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### Eval


Note that `checkpoint_0` is the base model and `checkpoint_mistral` is OpenMath-Mistral-7B-v0.1-hf.
The performance is _not good_™, but this model could be used to quickly generate synthetic data, since its coverage of this dataset is decent. The uploaded model is checkpoint-2.6k.
People involved in creating this fine-tune:
- Coulton Theuer [[email protected]]
- Bret Ellenbogen [[email protected]]
- Victoria Chang [[email protected]] | {"language": ["en"], "license": "apache-2.0"} | theuerc/TinyLlama-1.1B-Chat-Math-v1.0 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:08:33+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| <div align="center">
# TinyLlama-1.1B
</div>
We used this version of TinyLlama as a base model:
URL
The goal was to improve performance on basic algebra (i.e. solving systems of linear equations).
The base model was fine tuned on 8k rows synthetic solution data generated by OpenMath-Mistral-7B-v0.1-hf on ALG-514.
We used the NeMo Skills pipeline for inference with code execution and generating the synthetic data. HuggingFace's SFTTrainer was used for fine tuning, as the NeMo Skills pipeline is a buggy mess. It took 30 minutes to fine tune on an RTX3090.
Notes from previous model cards:
> We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### Eval
!image/png
!image/png
Note that 'checkpoint_0' is the base model and 'checkpoint_mistral' is OpenMath-Mistral-7B-v0.1-hf.
The performance is _not good_™, but this model could be used to quickly generate synthetic data, as the coverage is decent for this dataset. The uploaded model is checkpoint-2.6k.
People involved in creating this fine tune:
- Coulton Theuer [theuerc@URL]
- Bret Ellenbogen [bretelle@URL]
- Victoria Chang [vgc@URL] | [
"# TinyLlama-1.1B\n</div>\n\nWe used this version of TinyLlama as a base model:\nURL\n\nThe goal was to improve performance on basic algebra (i.e. solving systems of linear equations).\n\nThe base model was fine tuned on 8k rows synthetic solution data generated by OpenMath-Mistral-7B-v0.1-hf on ALG-514.\n\nWe used the NeMo Skills pipeline for inference with code execution and generating the synthetic data. HuggingFace's SFTTrainer was used for fine tuning, as the NeMo Skills pipeline is a buggy mess. It took 30 minutes to fine tune on an RTX3090.\n\n\nNotes from previous model cards:\n> We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.",
"#### Eval\n\n!image/png\n\n!image/png\n\nNote that 'checkpoint_0' is the base model and 'checkpoint_mistral' is OpenMath-Mistral-7B-v0.1-hf.\n\nThe performance is _not good_™, but this model could be used to quickly generate synthetic data, as the coverage is decent for this dataset. The uploaded model is checkpoint-2.6k.\n\n\nPeople involved in creating this fine tune: \n- Coulton Theuer [theuerc@URL]\n- Bret Ellenbogen [bretelle@URL]\n- Victoria Chang [vgc@URL]"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# TinyLlama-1.1B\n</div>\n\nWe used this version of TinyLlama as a base model:\nURL\n\nThe goal was to improve performance on basic algebra (i.e. solving systems of linear equations).\n\nThe base model was fine tuned on 8k rows synthetic solution data generated by OpenMath-Mistral-7B-v0.1-hf on ALG-514.\n\nWe used the NeMo Skills pipeline for inference with code execution and generating the synthetic data. HuggingFace's SFTTrainer was used for fine tuning, as the NeMo Skills pipeline is a buggy mess. It took 30 minutes to fine tune on an RTX3090.\n\n\nNotes from previous model cards:\n> We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.",
"#### Eval\n\n!image/png\n\n!image/png\n\nNote that 'checkpoint_0' is the base model and 'checkpoint_mistral' is OpenMath-Mistral-7B-v0.1-hf.\n\nThe performance is _not good_™, but this model could be used to quickly generate synthetic data, as the coverage is decent for this dataset. The uploaded model is checkpoint-2.6k.\n\n\nPeople involved in creating this fine tune: \n- Coulton Theuer [theuerc@URL]\n- Bret Ellenbogen [bretelle@URL]\n- Victoria Chang [vgc@URL]"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1975
- F1 Score: 0.9246
- Accuracy: 0.9247
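The F1 Score and Accuracy columns in this card track each other almost exactly, which is what you would expect for a roughly balanced binary classification task. A minimal illustration with hypothetical confusion-matrix counts (not taken from this run):

```python
def f1_and_accuracy(tp, fp, fn, tn):
    """Binary F1 and accuracy computed from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return f1, accuracy

# Hypothetical balanced split: 50 positives, 50 negatives, 5 errors each way.
f1, acc = f1_and_accuracy(tp=45, fp=5, fn=5, tn=45)
print(round(f1, 4), round(acc, 4))  # 0.9 0.9
```

On an unbalanced split the two metrics diverge, so the near-identical columns are themselves a hint that the evaluation set is balanced.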
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
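Given these settings, a linear scheduler maps each training step to a learning rate that decays from the peak to zero at the final step. A hedged sketch of that mapping (the card does not mention warmup, so `warmup_steps=0` is an assumption):

```python
def linear_lr(step, peak_lr=5e-4, total_steps=10_000, warmup_steps=0):
    """Linearly warm up to peak_lr, then decay linearly to 0 at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)

print(linear_lr(0))       # 0.0005
print(linear_lr(5_000))   # 0.00025
print(linear_lr(10_000))  # 0.0
```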
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.3196 | 0.54 | 200 | 0.2469 | 0.9029 | 0.9029 |
| 0.2439 | 1.08 | 400 | 0.2205 | 0.9132 | 0.9132 |
| 0.2294 | 1.62 | 600 | 0.2277 | 0.9057 | 0.9057 |
| 0.2141 | 2.16 | 800 | 0.2027 | 0.9209 | 0.9209 |
| 0.2083 | 2.7 | 1000 | 0.2019 | 0.9179 | 0.9179 |
| 0.2013 | 3.24 | 1200 | 0.1946 | 0.9221 | 0.9221 |
| 0.1984 | 3.78 | 1400 | 0.1923 | 0.9223 | 0.9223 |
| 0.194 | 4.32 | 1600 | 0.1892 | 0.9240 | 0.9240 |
| 0.1855 | 4.86 | 1800 | 0.2015 | 0.9202 | 0.9203 |
| 0.1829 | 5.41 | 2000 | 0.2014 | 0.9214 | 0.9215 |
| 0.1822 | 5.95 | 2200 | 0.1839 | 0.9243 | 0.9243 |
| 0.175 | 6.49 | 2400 | 0.1885 | 0.9207 | 0.9208 |
| 0.1776 | 7.03 | 2600 | 0.1802 | 0.9279 | 0.9279 |
| 0.1678 | 7.57 | 2800 | 0.1878 | 0.9214 | 0.9215 |
| 0.1716 | 8.11 | 3000 | 0.1953 | 0.9190 | 0.9191 |
| 0.1665 | 8.65 | 3200 | 0.1772 | 0.9233 | 0.9233 |
| 0.1611 | 9.19 | 3400 | 0.1872 | 0.9239 | 0.9240 |
| 0.1603 | 9.73 | 3600 | 0.1788 | 0.9273 | 0.9274 |
| 0.1684 | 10.27 | 3800 | 0.1743 | 0.9284 | 0.9284 |
| 0.1605 | 10.81 | 4000 | 0.1840 | 0.9246 | 0.9247 |
| 0.1575 | 11.35 | 4200 | 0.1830 | 0.9273 | 0.9274 |
| 0.155 | 11.89 | 4400 | 0.1807 | 0.9265 | 0.9265 |
| 0.153 | 12.43 | 4600 | 0.1738 | 0.9295 | 0.9296 |
| 0.1549 | 12.97 | 4800 | 0.1693 | 0.9309 | 0.9309 |
| 0.1497 | 13.51 | 5000 | 0.1735 | 0.9295 | 0.9296 |
| 0.1495 | 14.05 | 5200 | 0.1743 | 0.9299 | 0.9299 |
| 0.1476 | 14.59 | 5400 | 0.1701 | 0.9324 | 0.9324 |
| 0.1461 | 15.14 | 5600 | 0.1743 | 0.9307 | 0.9307 |
| 0.1451 | 15.68 | 5800 | 0.1791 | 0.9280 | 0.9280 |
| 0.1456 | 16.22 | 6000 | 0.1684 | 0.9326 | 0.9326 |
| 0.1436 | 16.76 | 6200 | 0.1755 | 0.9314 | 0.9314 |
| 0.1431 | 17.3 | 6400 | 0.1776 | 0.9297 | 0.9297 |
| 0.1452 | 17.84 | 6600 | 0.1743 | 0.9312 | 0.9313 |
| 0.1373 | 18.38 | 6800 | 0.1736 | 0.9317 | 0.9318 |
| 0.1419 | 18.92 | 7000 | 0.1694 | 0.9334 | 0.9334 |
| 0.1397 | 19.46 | 7200 | 0.1785 | 0.9304 | 0.9304 |
| 0.1366 | 20.0 | 7400 | 0.1709 | 0.9326 | 0.9326 |
| 0.1362 | 20.54 | 7600 | 0.1760 | 0.9314 | 0.9314 |
| 0.1362 | 21.08 | 7800 | 0.1705 | 0.9333 | 0.9333 |
| 0.1351 | 21.62 | 8000 | 0.1751 | 0.9324 | 0.9324 |
| 0.1337 | 22.16 | 8200 | 0.1726 | 0.9317 | 0.9318 |
| 0.1303 | 22.7 | 8400 | 0.1738 | 0.9317 | 0.9318 |
| 0.1305 | 23.24 | 8600 | 0.1715 | 0.9341 | 0.9341 |
| 0.1323 | 23.78 | 8800 | 0.1717 | 0.9329 | 0.9329 |
| 0.1312 | 24.32 | 9000 | 0.1781 | 0.9305 | 0.9306 |
| 0.1328 | 24.86 | 9200 | 0.1716 | 0.9339 | 0.9340 |
| 0.1294 | 25.41 | 9400 | 0.1773 | 0.9305 | 0.9306 |
| 0.1321 | 25.95 | 9600 | 0.1749 | 0.9317 | 0.9318 |
| 0.1293 | 26.49 | 9800 | 0.1729 | 0.9327 | 0.9328 |
| 0.1284 | 27.03 | 10000 | 0.1738 | 0.9321 | 0.9321 |
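The card reports metrics for the final checkpoint, but the table shows validation loss bottoming out earlier in training. A small sketch of selecting the best row by validation loss, over a few (step, loss) pairs excerpted from the table above:

```python
# (step, validation_loss) pairs excerpted from the training-results table.
checkpoints = [
    (4800, 0.1693),
    (5400, 0.1701),
    (6000, 0.1684),
    (7000, 0.1694),
    (10000, 0.1738),
]

# Pick the checkpoint with the lowest validation loss.
best_step, best_loss = min(checkpoints, key=lambda c: c[1])
print(best_step, best_loss)  # 6000 0.1684
```

If load-best-model-at-end behavior is wanted, a selection like this is what the trainer would apply; whether it was enabled for this run is not stated in the card.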
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:08:57+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_46M-L8\_f
========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1975
* F1 Score: 0.9246
* Accuracy: 0.9247
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1915
- F1 Score: 0.9236
- Accuracy: 0.9236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.2961 | 0.54 | 200 | 0.2382 | 0.9083 | 0.9083 |
| 0.2282 | 1.08 | 400 | 0.2118 | 0.9167 | 0.9167 |
| 0.2166 | 1.62 | 600 | 0.2036 | 0.9191 | 0.9191 |
| 0.1989 | 2.16 | 800 | 0.1856 | 0.9255 | 0.9255 |
| 0.1936 | 2.7 | 1000 | 0.1904 | 0.9202 | 0.9203 |
| 0.1846 | 3.24 | 1200 | 0.1763 | 0.9304 | 0.9304 |
| 0.1803 | 3.78 | 1400 | 0.1784 | 0.9246 | 0.9247 |
| 0.1757 | 4.32 | 1600 | 0.1797 | 0.9268 | 0.9269 |
| 0.1685 | 4.86 | 1800 | 0.1864 | 0.9212 | 0.9213 |
| 0.1645 | 5.41 | 2000 | 0.1825 | 0.9244 | 0.9245 |
| 0.1649 | 5.95 | 2200 | 0.1713 | 0.9287 | 0.9287 |
| 0.1554 | 6.49 | 2400 | 0.1837 | 0.9203 | 0.9204 |
| 0.1596 | 7.03 | 2600 | 0.1654 | 0.9326 | 0.9326 |
| 0.149 | 7.57 | 2800 | 0.1679 | 0.9283 | 0.9284 |
| 0.1494 | 8.11 | 3000 | 0.1779 | 0.9294 | 0.9294 |
| 0.1467 | 8.65 | 3200 | 0.1604 | 0.9336 | 0.9336 |
| 0.1393 | 9.19 | 3400 | 0.1757 | 0.9266 | 0.9267 |
| 0.1376 | 9.73 | 3600 | 0.1739 | 0.9297 | 0.9297 |
| 0.1451 | 10.27 | 3800 | 0.1667 | 0.9304 | 0.9304 |
| 0.1377 | 10.81 | 4000 | 0.1770 | 0.9295 | 0.9296 |
| 0.1322 | 11.35 | 4200 | 0.1680 | 0.9324 | 0.9324 |
| 0.1312 | 11.89 | 4400 | 0.1736 | 0.9317 | 0.9318 |
| 0.1284 | 12.43 | 4600 | 0.1709 | 0.9322 | 0.9323 |
| 0.1284 | 12.97 | 4800 | 0.1614 | 0.9351 | 0.9351 |
| 0.1227 | 13.51 | 5000 | 0.1781 | 0.9285 | 0.9285 |
| 0.1231 | 14.05 | 5200 | 0.1704 | 0.9305 | 0.9306 |
| 0.1182 | 14.59 | 5400 | 0.1663 | 0.9341 | 0.9341 |
| 0.1185 | 15.14 | 5600 | 0.1773 | 0.9281 | 0.9282 |
| 0.1149 | 15.68 | 5800 | 0.1893 | 0.9298 | 0.9299 |
| 0.116 | 16.22 | 6000 | 0.1670 | 0.9350 | 0.9350 |
| 0.1114 | 16.76 | 6200 | 0.1812 | 0.9303 | 0.9304 |
| 0.1122 | 17.3 | 6400 | 0.1755 | 0.9303 | 0.9304 |
| 0.1115 | 17.84 | 6600 | 0.1776 | 0.9307 | 0.9307 |
| 0.1043 | 18.38 | 6800 | 0.1797 | 0.9329 | 0.9329 |
| 0.1069 | 18.92 | 7000 | 0.1718 | 0.9339 | 0.9340 |
| 0.1046 | 19.46 | 7200 | 0.1826 | 0.9319 | 0.9319 |
| 0.1016 | 20.0 | 7400 | 0.1825 | 0.9319 | 0.9319 |
| 0.1013 | 20.54 | 7600 | 0.1754 | 0.9331 | 0.9331 |
| 0.1019 | 21.08 | 7800 | 0.1747 | 0.9341 | 0.9341 |
| 0.0981 | 21.62 | 8000 | 0.1815 | 0.9322 | 0.9323 |
| 0.0977 | 22.16 | 8200 | 0.1801 | 0.9331 | 0.9331 |
| 0.0926 | 22.7 | 8400 | 0.1866 | 0.9326 | 0.9326 |
| 0.095 | 23.24 | 8600 | 0.1792 | 0.9350 | 0.9350 |
| 0.0926 | 23.78 | 8800 | 0.1840 | 0.9336 | 0.9336 |
| 0.0926 | 24.32 | 9000 | 0.1880 | 0.9324 | 0.9324 |
| 0.0911 | 24.86 | 9200 | 0.1821 | 0.9346 | 0.9346 |
| 0.088 | 25.41 | 9400 | 0.1905 | 0.9329 | 0.9329 |
| 0.0924 | 25.95 | 9600 | 0.1874 | 0.9321 | 0.9321 |
| 0.0899 | 26.49 | 9800 | 0.1867 | 0.9327 | 0.9328 |
| 0.0878 | 27.03 | 10000 | 0.1868 | 0.9322 | 0.9323 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:09:05+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_prom\_prom\_300\_all-seqsight\_4096\_512\_46M-L32\_f
=========================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1915
* F1 Score: 0.9236
* Accuracy: 0.9236
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4728
- F1 Score: 0.7696
- Accuracy: 0.7691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5656 | 0.97 | 200 | 0.5316 | 0.7443 | 0.7425 |
| 0.524 | 1.93 | 400 | 0.5002 | 0.7667 | 0.7655 |
| 0.5078 | 2.9 | 600 | 0.5143 | 0.7621 | 0.7607 |
| 0.5036 | 3.86 | 800 | 0.4868 | 0.7737 | 0.7737 |
| 0.4995 | 4.83 | 1000 | 0.5012 | 0.7639 | 0.7625 |
| 0.4916 | 5.8 | 1200 | 0.4945 | 0.7695 | 0.7679 |
| 0.4904 | 6.76 | 1400 | 0.4811 | 0.7734 | 0.7719 |
| 0.4851 | 7.73 | 1600 | 0.4808 | 0.7789 | 0.7776 |
| 0.4825 | 8.7 | 1800 | 0.4791 | 0.7782 | 0.7767 |
| 0.4775 | 9.66 | 2000 | 0.4805 | 0.7776 | 0.7761 |
| 0.483 | 10.63 | 2200 | 0.4928 | 0.7712 | 0.7697 |
| 0.4717 | 11.59 | 2400 | 0.4730 | 0.7798 | 0.7785 |
| 0.4769 | 12.56 | 2600 | 0.4842 | 0.7779 | 0.7764 |
| 0.4716 | 13.53 | 2800 | 0.4786 | 0.7746 | 0.7731 |
| 0.4732 | 14.49 | 3000 | 0.4813 | 0.7731 | 0.7716 |
| 0.4646 | 15.46 | 3200 | 0.4749 | 0.7800 | 0.7788 |
| 0.4703 | 16.43 | 3400 | 0.4835 | 0.7749 | 0.7734 |
| 0.4664 | 17.39 | 3600 | 0.4700 | 0.7805 | 0.7794 |
| 0.4645 | 18.36 | 3800 | 0.4715 | 0.7784 | 0.7785 |
| 0.4633 | 19.32 | 4000 | 0.4713 | 0.7847 | 0.7834 |
| 0.4597 | 20.29 | 4200 | 0.4761 | 0.7812 | 0.7800 |
| 0.462 | 21.26 | 4400 | 0.4740 | 0.7863 | 0.7849 |
| 0.4596 | 22.22 | 4600 | 0.4812 | 0.7785 | 0.7770 |
| 0.4597 | 23.19 | 4800 | 0.4693 | 0.7873 | 0.7861 |
| 0.459 | 24.15 | 5000 | 0.4714 | 0.7849 | 0.7837 |
| 0.4567 | 25.12 | 5200 | 0.4793 | 0.7812 | 0.7797 |
| 0.4551 | 26.09 | 5400 | 0.4692 | 0.7857 | 0.7843 |
| 0.4562 | 27.05 | 5600 | 0.4773 | 0.7833 | 0.7818 |
| 0.4522 | 28.02 | 5800 | 0.4689 | 0.7874 | 0.7864 |
| 0.4536 | 28.99 | 6000 | 0.4760 | 0.7824 | 0.7809 |
| 0.4521 | 29.95 | 6200 | 0.4757 | 0.7851 | 0.7837 |
| 0.4519 | 30.92 | 6400 | 0.4663 | 0.7874 | 0.7867 |
| 0.4512 | 31.88 | 6600 | 0.4705 | 0.7877 | 0.7864 |
| 0.4482 | 32.85 | 6800 | 0.4728 | 0.7890 | 0.7876 |
| 0.451 | 33.82 | 7000 | 0.4766 | 0.7830 | 0.7815 |
| 0.4483 | 34.78 | 7200 | 0.4779 | 0.7837 | 0.7821 |
| 0.4497 | 35.75 | 7400 | 0.4693 | 0.7879 | 0.7867 |
| 0.4466 | 36.71 | 7600 | 0.4706 | 0.7886 | 0.7873 |
| 0.4458 | 37.68 | 7800 | 0.4705 | 0.7880 | 0.7867 |
| 0.4459 | 38.65 | 8000 | 0.4674 | 0.7897 | 0.7888 |
| 0.4516 | 39.61 | 8200 | 0.4700 | 0.7889 | 0.7876 |
| 0.4434 | 40.58 | 8400 | 0.4737 | 0.7866 | 0.7852 |
| 0.4459 | 41.55 | 8600 | 0.4691 | 0.7882 | 0.7870 |
| 0.4429 | 42.51 | 8800 | 0.4703 | 0.7895 | 0.7882 |
| 0.4472 | 43.48 | 9000 | 0.4776 | 0.7878 | 0.7864 |
| 0.4449 | 44.44 | 9200 | 0.4728 | 0.7878 | 0.7864 |
| 0.4463 | 45.41 | 9400 | 0.4711 | 0.7874 | 0.7861 |
| 0.4401 | 46.38 | 9600 | 0.4737 | 0.7881 | 0.7867 |
| 0.4472 | 47.34 | 9800 | 0.4711 | 0.7871 | 0.7858 |
| 0.445 | 48.31 | 10000 | 0.4707 | 0.7886 | 0.7873 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:11:15+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_4096\_512\_46M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4728
* F1 Score: 0.7696
* Accuracy: 0.7691
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4760
- F1 Score: 0.7749
- Accuracy: 0.7746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5506 | 0.97 | 200 | 0.5042 | 0.7665 | 0.7649 |
| 0.5061 | 1.93 | 400 | 0.4880 | 0.7721 | 0.7707 |
| 0.4929 | 2.9 | 600 | 0.5016 | 0.7645 | 0.7631 |
| 0.4883 | 3.86 | 800 | 0.4736 | 0.7791 | 0.7785 |
| 0.4837 | 4.83 | 1000 | 0.5029 | 0.7614 | 0.7604 |
| 0.4758 | 5.8 | 1200 | 0.4965 | 0.7623 | 0.7610 |
| 0.4725 | 6.76 | 1400 | 0.4706 | 0.7836 | 0.7821 |
| 0.4665 | 7.73 | 1600 | 0.4736 | 0.7857 | 0.7843 |
| 0.4634 | 8.7 | 1800 | 0.4804 | 0.7809 | 0.7794 |
| 0.4562 | 9.66 | 2000 | 0.4784 | 0.7785 | 0.7770 |
| 0.4592 | 10.63 | 2200 | 0.4830 | 0.7806 | 0.7791 |
| 0.4498 | 11.59 | 2400 | 0.4708 | 0.7844 | 0.7831 |
| 0.4515 | 12.56 | 2600 | 0.4800 | 0.7815 | 0.7800 |
| 0.445 | 13.53 | 2800 | 0.4796 | 0.7728 | 0.7713 |
| 0.4446 | 14.49 | 3000 | 0.4770 | 0.7803 | 0.7788 |
| 0.4338 | 15.46 | 3200 | 0.4799 | 0.7835 | 0.7825 |
| 0.4396 | 16.43 | 3400 | 0.4798 | 0.7797 | 0.7782 |
| 0.4335 | 17.39 | 3600 | 0.4743 | 0.7841 | 0.7828 |
| 0.429 | 18.36 | 3800 | 0.4714 | 0.7858 | 0.7858 |
| 0.4269 | 19.32 | 4000 | 0.4705 | 0.7920 | 0.7912 |
| 0.4222 | 20.29 | 4200 | 0.4872 | 0.7809 | 0.7800 |
| 0.426 | 21.26 | 4400 | 0.4792 | 0.7833 | 0.7818 |
| 0.4192 | 22.22 | 4600 | 0.4964 | 0.7758 | 0.7743 |
| 0.418 | 23.19 | 4800 | 0.4780 | 0.7823 | 0.7812 |
| 0.4172 | 24.15 | 5000 | 0.4955 | 0.7748 | 0.7734 |
| 0.4118 | 25.12 | 5200 | 0.5083 | 0.7752 | 0.7737 |
| 0.4093 | 26.09 | 5400 | 0.4897 | 0.7761 | 0.7746 |
| 0.4119 | 27.05 | 5600 | 0.5046 | 0.7707 | 0.7691 |
| 0.4055 | 28.02 | 5800 | 0.4882 | 0.7847 | 0.7834 |
| 0.405 | 28.99 | 6000 | 0.4886 | 0.7788 | 0.7773 |
| 0.4024 | 29.95 | 6200 | 0.4903 | 0.7714 | 0.7700 |
| 0.4001 | 30.92 | 6400 | 0.4825 | 0.7804 | 0.7803 |
| 0.3992 | 31.88 | 6600 | 0.4916 | 0.7755 | 0.7746 |
| 0.3932 | 32.85 | 6800 | 0.5003 | 0.7751 | 0.7737 |
| 0.3965 | 33.82 | 7000 | 0.5031 | 0.7695 | 0.7679 |
| 0.3912 | 34.78 | 7200 | 0.5025 | 0.7734 | 0.7719 |
| 0.3922 | 35.75 | 7400 | 0.4921 | 0.7713 | 0.7700 |
| 0.3893 | 36.71 | 7600 | 0.4995 | 0.7765 | 0.7752 |
| 0.386 | 37.68 | 7800 | 0.5018 | 0.7730 | 0.7716 |
| 0.3874 | 38.65 | 8000 | 0.5012 | 0.7749 | 0.7737 |
| 0.3909 | 39.61 | 8200 | 0.4984 | 0.7721 | 0.7710 |
| 0.382 | 40.58 | 8400 | 0.5084 | 0.7713 | 0.7697 |
| 0.3837 | 41.55 | 8600 | 0.5034 | 0.7743 | 0.7731 |
| 0.3819 | 42.51 | 8800 | 0.5033 | 0.7757 | 0.7746 |
| 0.3829 | 43.48 | 9000 | 0.5079 | 0.7757 | 0.7743 |
| 0.381 | 44.44 | 9200 | 0.5102 | 0.7727 | 0.7713 |
| 0.3843 | 45.41 | 9400 | 0.5049 | 0.7747 | 0.7734 |
| 0.376 | 46.38 | 9600 | 0.5101 | 0.7730 | 0.7716 |
| 0.3797 | 47.34 | 9800 | 0.5075 | 0.7729 | 0.7716 |
| 0.3789 | 48.31 | 10000 | 0.5064 | 0.7740 | 0.7728 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:13:41+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_4096\_512\_46M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4760
* F1 Score: 0.7749
* Accuracy: 0.7746
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | AmrMohamed/Meta-Llama-3-8B-grading | null | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:14:26+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #pytorch #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #pytorch #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# outputs
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
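As an illustrative aside (not the trainer's actual code), a single Adam update with the hyperparameters listed above — lr=5e-05, betas=(0.9, 0.999), epsilon=1e-08 — can be sketched in plain Python:

```python
# Illustrative sketch only: one Adam step with the hyperparameters above.
# In practice the optimizer is torch.optim.Adam; this just shows the math.
def adam_step(param, grad, m, v, t, lr=5e-5, b1=0.9, b2=0.999, eps=1e-8):
    """Return the updated parameter and first/second moment estimates."""
    m = b1 * m + (1 - b1) * grad          # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad * grad   # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# First update (t=1) moves the parameter by roughly one learning rate,
# since the bias-corrected ratio m_hat / sqrt(v_hat) is close to 1.
p, m, v = adam_step(1.0, grad=0.5, m=0.0, v=0.0, t=1)
```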
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"license": "mit", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "outputs", "results": []}]} | AK232003/outputs | null | [
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:gpt2",
"license:mit",
"region:us"
] | null | 2024-04-26T20:14:54+00:00 | [] | [] | TAGS
#peft #safetensors #trl #reward-trainer #generated_from_trainer #base_model-gpt2 #license-mit #region-us
|
# outputs
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | [
"# outputs\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#peft #safetensors #trl #reward-trainer #generated_from_trainer #base_model-gpt2 #license-mit #region-us \n",
"# outputs\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 4\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.3.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0623
- Precision: 0.9346
- Recall: 0.9505
- F1: 0.9425
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0777 | 1.0 | 1756 | 0.0721 | 0.9129 | 0.9325 | 0.9226 | 0.9814 |
| 0.036 | 2.0 | 3512 | 0.0604 | 0.9309 | 0.9477 | 0.9392 | 0.9859 |
| 0.0186 | 3.0 | 5268 | 0.0623 | 0.9346 | 0.9505 | 0.9425 | 0.9864 |
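As a quick illustrative check (not part of the evaluation code), the reported F1 is the harmonic mean of the reported precision and recall:

```python
# Illustrative check only: F1 as the harmonic mean of precision and recall,
# using the final-epoch numbers from the table above.
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9346, 0.9505), 4))  # 0.9425, matching the table
```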
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.1+cpu
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "bert-finetuned-ner", "results": []}]} | HFDON/bert-finetuned-ner | null | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:15:58+00:00 | [] | [] | TAGS
#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| bert-finetuned-ner
==================
This model is a fine-tuned version of bert-base-cased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0623
* Precision: 0.9346
* Recall: 0.9505
* F1: 0.9425
* Accuracy: 0.9864
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.37.2
* Pytorch 2.2.1+cpu
* Datasets 2.19.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.1+cpu\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.1+cpu\n* Datasets 2.19.0\n* Tokenizers 0.15.2"
] |
null | null |
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 25vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: "lucida sans unicode", "lucida grande", sans-serif;
font-style: italic;
font-weight: bold;
font-variant: small-caps;
letter-spacing: 0.05em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #99E2FF, #FF5C95);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 0em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 5px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 5px;
transition: transform .7s;
/* Smooth transition for the container */
}
.custom-image-container:hover {
transform: scale(1.05);
/* Scale the container on hover */
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.custom-button {
width: auto;
height: 20px;
object-fit: cover;
border-radius: 5px;
transition: transform .7s;
margin-bottom: 0em;
display: block;
margin-left: auto;
margin-right: auto;
}
</style>
# Trails/Kiseki series - 192 characters/outfits
## __Models created by user [irrel@Civitai](https://civitai.com/user/irrel).__
## __Originally published [here](https://civitai.com/models/417827).__
This is a set of 192 characters/outfits from the Trails series for use with Pony Diffusion V6 XL or any of its derivatives. I chose to separate them into individual loras for improved accuracy.\
The .zip file contains:
* 192 lora files (~20MB each)
* A folder with 2 text files containing example prompts for each character
* kiseki.txt (characters' physical features and outfit)
* kiseki-noclothes.txt (characters' physical features only)\
The models should work with most style loras. They are also flexible enough to support simple custom outfits. Some of the more obscure characters might appear a bit stiff as they had to be trained entirely on 3D renders.
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/5e2f634f-f54d-4506-bcdb-046ed72bf9de/original=true/02540-8.jpeg" alt="sample2">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/4fa84748-8c7f-4ebf-9ce0-47c19724e13c/original=true/02222-1349480391.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ebc1f9b4-8baa-493d-98b0-b62a585b47b1/original=true/02381-3917646338.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ceb352de-d566-4e9c-b1d4-a9e3630974ca/original=true/01617-206421237.jpeg" alt="sample1">
</div>
</td>
</table>
<table class="custom-table">
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d6be2e40-c496-4e33-9fee-c4798da8672c/original=true/01618-1856211698.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2ba0670a-4404-48f0-8591-06f0b302c8ae/original=true/01691-382581776.jpeg" alt="sample2">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d95bcecf-47db-4779-a072-36ad28f1b9da/original=true/01242-123.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/53ad44f2-9c4d-406a-a9b6-9ea7014168c2/original=true/02308-3917646265.jpeg" alt="sample1">
</div>
</td>
</tr>
</table>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d7f62f19-8033-4a44-9160-ecb96fafad0e/original=true/01873-4122203611.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1484726c-e3ee-4eda-8f39-dccab9827c01/original=true/01712-3795625343.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ca9a0c82-ac35-4999-8c0c-8abc0a5f5453/original=true/02299-3917646256.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2a7a51c3-88e6-4a8b-a4c1-a240509acd67/original=true/01904-1427461507.jpeg" alt="sample1">
</div>
</td>
</table>
<table class="custom-table">
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1a8684cb-5fe2-4942-8247-1e71e0ac20e2/original=true/00926-3008217640.jpeg" alt="sample2">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1c34eff0-30b3-4817-a07f-be1eeabe7ba9/original=true/01530-3892649669.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/a08256a0-d68b-43b4-a02f-ccae086b591f/original=true/01528-2935231619.jpeg" alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/20d8c0cc-96cb-4ea7-94c9-a4301d21694d/original=true/01370-3301814111.jpeg" alt="sample1">
</div>
</td>
</tr>
</table>
## Included characters:
1. Ada Grant
2. Aeolia
3. Agnes Claudel (5 outfits)
4. Alfin Reise Arnor
5. Alisa Reinford (3 outfits)
6. Altera
7. Altina Orion (5 outfits)
8. Anelace Elfead
9. Angelica Rogner (2 outfits)
10. Annabelle
11. Arianrhod
12. Ashen Lu
13. Aurelia Le Guin
14. Aurier Vander
15. Becky
16. Bennet
17. Beryl
18. Cecile Neues (2 outfits)
19. Celine Millstein
20. Celis Ortesia (2 outfits)
21. Claire Rieveldt (3 outfits)
22. Clarice Seeker
23. Diana
24. Dorothee
25. Duvalie (2 outfits)
26. Edel
27. Elaine Auclair (3 outfits)
28. Elie MacDowell (3 outfits)
29. Elise Schwarzer
30. Emily
31. Emma Millstein (3 outfits)
32. Ennea
33. Erika Russell
34. Esmeray Archette (2 outfits)
35. Estelle Bright (3 outfits)
36. Fatma Worzel
37. Feri Al-Fayed (2 outfits)
38. Ferris Florald
39. Fie Claussell (2 outfits)
40. Fiona Craig
41. Flotte
42. Fran Seeker
43. Grace Lynn
44. Grandmaster
45. Grimcats
46. Hal Coleman
47. Hermes
48. Ilya Platiere (2 outfits)
49. Ines
50. Irina Reinford
51. Jessica Schleiden
52. Jingo
53. Jolda
54. Josette Capua (2 outfits)
55. Judith Ranster (2 outfits)
56. Julia Schwarz
57. Juna Crawford (3 outfits)
58. Kaela MacMillan
59. Kate (2 outfits)
60. KeA (2 outfits)
61. Kilika Rouran
62. Kisara (2 outfits)
63. Klaudia Von Auslese (3 outfits)
64. Lapis Rosenberg
65. Laura S Arseid
66. Lavian Winslet
67. Leonora
68. Lila (2 outfits)
69. Lina Crawford
70. Linde
71. Louise (2 outfits)
72. Lucrezia Isselee
73. Lucy Seiland
74. Lynn
75. Mare
76. Mariabell Crois (2 outfits)
77. Marielle Ayme
78. Maya
79. Millium Orion
80. Mint
81. Mirabel Aalton
82. Mireille
83. Misty
84. Musse Egret (3 outfits)
85. Nadia Rayne (2 outfits)
86. Naje Berca
87. Nina Fenly
88. Noel Seeker
89. Odette
90. Olympia
91. Patiry
92. Paulette
93. Priscilla Reise Arnor
94. Rebecca
95. Regina
96. Renne Bright (4 outfits)
97. Ries Argent
98. Risette Twinings (3 outfits)
99. Rixia Mao (3 outfits)
100. Rosalie
101. Roselia Millstein (2 versions)
102. Rosine
103. Saara (2 outfits)
104. Sandy
105. Sara Valestein
106. Scherazard Harvey
107. Shahina (2 outfits)
108. Shanshan
109. Sharon Kreuger (2 outfits)
110. Shino
111. Shirley Orlando (3 outfits)
112. Shizuku MacLaine (2 outfits)
113. Shizuna Rem Misurugi (2 outfits)
114. Sonya Baelz
115. Sophia Hayworth
116. Sully Atraid (2 outfits)
117. Tatiana
118. Theresia Caroline
119. Tio Plato (2 outfits)
120. Tita Russell (2 outfits)
121. Towa Herschel (3 outfits)
122. Valerie (2 outfits)
123. Viola
124. Vita Clotilde (2 outfits)
125. Vivi
126. Wendy
127. Zita Asverl | {"license": "cc-by-nc-4.0"} | hhks/kiseki_chars | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-04-26T20:17:23+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 25vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: "lucida sans unicode", "lucida grande", sans-serif;
font-style: italic;
font-weight: bold;
font-variant: small-caps;
letter-spacing: 0.05em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #99E2FF, #FF5C95);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 0em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 5px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 5px;
transition: transform .7s;
/* Smooth transition for the container */
}
.custom-image-container:hover {
transform: scale(1.05);
/* Scale the container on hover */
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.custom-button {
width: auto;
height: 20px;
object-fit: cover;
border-radius: 5px;
transition: transform .7s;
margin-bottom: 0em;
display: block;
margin-left: auto;
margin-right: auto;
}
</style>
# Trails/Kiseki series - 192 characters/outfits
## __Models created by user irrel@Civitai.__
## __Originally published here.__
This is a set of 192 characters/outfits from the Trails series for use with Pony Diffusion V6 XL or any of its derivatives. I chose to separate them into individual loras for improved accuracy.\
The .zip file contains:
* 192 lora files (~20MB each)
* A folder with 2 text files containing example prompts for each character
* URL (characters' physical features and outfit)
* URL (characters' physical features only)\
The models should work with most style loras. They are also flexible enough to support simple custom outfits. Some of the more obscure characters might appear a bit stiff as they had to be trained entirely on 3D renders.
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample2">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
</table>
<table class="custom-table">
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample2">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
</tr>
</table>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
</table>
<table class="custom-table">
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample2">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="URL alt="sample1">
</div>
</td>
</tr>
</table>
## Included characters:
1. Ada Grant
2. Aeolia
3. Agnes Claudel (5 outfits)
4. Alfin Reise Arnor
5. Alisa Reinford (3 outfits)
6. Altera
7. Altina Orion (5 outfits)
8. Anelace Elfead
9. Angelica Rogner (2 outfits)
10. Annabelle
11. Arianrhod
12. Ashen Lu
13. Aurelia Le Guin
14. Aurier Vander
15. Becky
16. Bennet
17. Beryl
18. Cecile Neues (2 outfits)
19. Celine Millstein
20. Celis Ortesia (2 outfits)
21. Claire Rieveldt (3 outfits)
22. Clarice Seeker
23. Diana
24. Dorothee
25. Duvalie (2 outfits)
26. Edel
27. Elaine Auclair (3 outfits)
28. Elie MacDowell (3 outfits)
29. Elise Schwarzer
30. Emily
31. Emma Millstein (3 outfits)
32. Ennea
33. Erika Russell
34. Esmeray Archette (2 outfits)
35. Estelle Bright (3 outfits)
36. Fatma Worzel
37. Feri Al-Fayed (2 outfits)
38. Ferris Florald
39. Fie Claussell (2 outfits)
40. Fiona Craig
41. Flotte
42. Fran Seeker
43. Grace Lynn
44. Grandmaster
45. Grimcats
46. Hal Coleman
47. Hermes
48. Ilya Platiere (2 outfits)
49. Ines
50. Irina Reinford
51. Jessica Schleiden
52. Jingo
53. Jolda
54. Josette Capua (2 outfits)
55. Judith Ranster (2 outfits)
56. Julia Schwarz
57. Juna Crawford (3 outfits)
58. Kaela MacMillan
59. Kate (2 outfits)
60. KeA (2 outfits)
61. Kilika Rouran
62. Kisara (2 outfits)
63. Klaudia Von Auslese (3 outfits)
64. Lapis Rosenberg
65. Laura S Arseid
66. Lavian Winslet
67. Leonora
68. Lila (2 outfits)
69. Lina Crawford
70. Linde
71. Louise (2 outfits)
72. Lucrezia Isselee
73. Lucy Seiland
74. Lynn
75. Mare
76. Mariabell Crois (2 outfits)
77. Marielle Ayme
78. Maya
79. Millium Orion
80. Mint
81. Mirabel Aalton
82. Mireille
83. Misty
84. Musse Egret (3 outfits)
85. Nadia Rayne (2 outfits)
86. Naje Berca
87. Nina Fenly
88. Noel Seeker
89. Odette
90. Olympia
91. Patiry
92. Paulette
93. Priscilla Reise Arnor
94. Rebecca
95. Regina
96. Renne Bright (4 outfits)
97. Ries Argent
98. Risette Twinings (3 outfits)
99. Rixia Mao (3 outfits)
100. Rosalie
101. Roselia Millstein (2 versions)
102. Rosine
103. Saara (2 outfits)
104. Sandy
105. Sara Valestein
106. Scherazard Harvey
107. Shahina (2 outfits)
108. Shanshan
109. Sharon Kreuger (2 outfits)
110. Shino
111. Shirley Orlando (3 outfits)
112. Shizuku MacLaine (2 outfits)
113. Shizuna Rem Misurugi (2 outfits)
114. Sonya Baelz
115. Sophia Hayworth
116. Sully Atraid (2 outfits)
117. Tatiana
118. Theresia Caroline
119. Tio Plato (2 outfits)
120. Tita Russell (2 outfits)
121. Towa Herschel (3 outfits)
122. Valerie (2 outfits)
123. Viola
124. Vita Clotilde (2 outfits)
125. Vivi
126. Wendy
127. Zita Asverl | [
"# Trails/Kiseki series - 192 characters/outfits",
"## __Models created by user irrel@Civitai.__",
"## __Originally published here.__\n\nThis is a set of 192 characters/outfits from the Trails series for use with Pony Diffusion V6 XL or any of its derivatives. I chose to separate them into individual loras for improved accuracy.\\\nThe .zip file contains:\n* 192 lora files (~20MB each)\n* A folder with 2 text files containing example prompts for each character\n* URL (characters' physical features and outfit)\n* URL (characters' physical features only)\\\nThe models should work with most style loras. They are also flexible enough to support simple custom outfits. Some of the more obscure characters might appear a bit stiff as they had to be trained entirely on 3D renders.\n\n\n\n<table class=\"custom-table\">\n <tr>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample2\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n</table>\n<table class=\"custom-table\">\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample2\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n </tr>\n</table>\n<table class=\"custom-table\">\n <tr>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img 
class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n</table>\n<table class=\"custom-table\">\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample2\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n </tr>\n</table>",
"## Included characters:\n\n1. Ada Grant\n2. Aeolia\n3. Agnes Claudel (5 outfits)\n4. Alfin Reise Arnor\n5. Alisa Reinford (3 outfits)\n6. Altera\n7. Altina Orion (5 outfits)\n8. Anelace Elfead\n9. Angelica Rogner (2 outfits)\n10. Annabelle\n11. rianrhod\n12. shen Lu\n13. urelia Le Guin\n14. urier Vander\n15. Becky\n16. Bennet\n17. Beryl\n18. Cecile Neues (2 outfits)\n19. Celine Millstein\n20. Celis Ortesia (2 outfits)\n21. Claire Rieveldt (3 outfits)\n22. Clarice Seeker\n23. Diana\n24. Dorothee\n25. Duvalie (2 outfits)\n26. Edel\n27. Elaine Auclair (3 outfits)\n28. Elie MacDowell (3 outfits)\n29. Elise Schwarzer\n30. Emily\n31. Emma Millstein (3 outfits)\n32. Ennea\n33. Erika Russell\n34. Esmeray Archette (2 outfits)\n35. Estelle Bright (3 outfits)\n36. Fatma Worzel\n37. Feri Al-Fayed (2 outfits)\n38. Ferris Florald\n39. Fie Claussell (2 outfits)\n40. Fiona Craig\n41. Flotte\n42. Fran Seekr\n43. Grace Lynn\n44. Grandmaster\n45. Grimcats\n46. Hal Coleman\n47. Hermes\n48. Ilya Platiere (2 outfits)\n49. Ines\n50. Irina Reinford\n51. Jessica Schleiden\n52. Jingo\n53. Jolda\n54. Josette Capua (2 outfits)\n55. Judith Ranster (2 outfits)\n56. Julia Schwar\n57. Juna Crawford (3 outfits)\n58. Kaela MacMillan\n59. Kate (2 outfits)\n60. KeA (2 outfits)\n61. Kilika Rouran\n62. Kisara (2 outfits)\n63. Klaudia Von Auslese (3 outfits)\n64. Lapis Rosenberg\n65. Laura S Arseid\n66. Lavian Winslet\n67. Leonora\n68. Lila (2 outfits)\n69. Lina Crawford\n70. Linde\n71. Louise (2 outfits)\n72. Lucrezia Isselee\n73. Lucy Seiland\n74. Lynn\n75. Mare\n76. Mariabell Crois (2 outfits)\n77. Marielle Ayme\n78. Maya\n79. Millium Orion\n80. Mint\n81. Mirabel Aalton\n82. Mireille\n83. Misty\n84. Musse Egret (3 outfits)\n85. Nadia Rayne (2 outfits)\n86. Naje Berca\n87. Nina Fenly\n88. Noel Seeker\n89. Odette\n90. Olympia\n91. Patiry\n92. Paulette\n93. Priscilla Reise Arnor\n94. Rebecca\n95. Regina\n96. Renne Bright (4 outfits)\n97. Ries Argent\n98. Risette Twinings (3 outfits)\n99. 
Rixia Mao (3 outfits)\n100. Rosalie\n101. Roselia Millstein (2 versions)\n102. Rosine\n103. Saara (2 outfits)\n104. Sandy\n105. Sara Valestein\n106. Scherazard Harvey\n107. Shahina (2 outfits)\n108. Shanshan\n109. Sharon Kreuger (2 outfits)\n110. Shino\n111. Shirley Orlando (3 outfits)\n112. Shizuku MacLaine (2 outfits)\n113. Shizuna Rem Misurugi (2 outfits)\n114. Sonya Baelz\n115. Sophia Hayworth\n116. Sully Atraid (2 outfits)\n117. Tatiana\n118. Theresia Caroline\n119. Tio Plato (2 outfits)\n120. Tita Russell (2 outfits)\n121. Towa Herschel (3 outfits)\n122. Valerie (2 outfits)\n123. Viola\n124. Vita Clotilde (2 outfits)\n125. Vivi\n126. Wendy\n127. Zita Asverl"
] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"# Trails/Kiseki series - 192 characters/outfits",
"## __Models created by user irrel@Civitai.__",
"## __Originally published here.__\n\nThis is a set of 192 characters/outfits from the Trails series for use with Pony Diffusion V6 XL or any of its derivatives. I chose to separate them into individual loras for improved accuracy.\\\nThe .zip file contains:\n* 192 lora files (~20MB each)\n* A folder with 2 text files containing example prompts for each character\n* URL (characters' physical features and outfit)\n* URL (characters' physical features only)\\\nThe models should work with most style loras. They are also flexible enough to support simple custom outfits. Some of the more obscure characters might appear a bit stiff as they had to be trained entirely on 3D renders.\n\n\n\n<table class=\"custom-table\">\n <tr>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample2\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n</table>\n<table class=\"custom-table\">\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample2\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n </tr>\n</table>\n<table class=\"custom-table\">\n <tr>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img 
class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n</table>\n<table class=\"custom-table\">\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample2\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n <td>\n <div class=\"custom-image-container\">\n <img class=\"custom-image\" src=\"URL alt=\"sample1\">\n </div>\n </td>\n </tr>\n</table>",
"## Included characters:\n\n1. Ada Grant\n2. Aeolia\n3. Agnes Claudel (5 outfits)\n4. Alfin Reise Arnor\n5. Alisa Reinford (3 outfits)\n6. Altera\n7. Altina Orion (5 outfits)\n8. Anelace Elfead\n9. Angelica Rogner (2 outfits)\n10. Annabelle\n11. rianrhod\n12. shen Lu\n13. urelia Le Guin\n14. urier Vander\n15. Becky\n16. Bennet\n17. Beryl\n18. Cecile Neues (2 outfits)\n19. Celine Millstein\n20. Celis Ortesia (2 outfits)\n21. Claire Rieveldt (3 outfits)\n22. Clarice Seeker\n23. Diana\n24. Dorothee\n25. Duvalie (2 outfits)\n26. Edel\n27. Elaine Auclair (3 outfits)\n28. Elie MacDowell (3 outfits)\n29. Elise Schwarzer\n30. Emily\n31. Emma Millstein (3 outfits)\n32. Ennea\n33. Erika Russell\n34. Esmeray Archette (2 outfits)\n35. Estelle Bright (3 outfits)\n36. Fatma Worzel\n37. Feri Al-Fayed (2 outfits)\n38. Ferris Florald\n39. Fie Claussell (2 outfits)\n40. Fiona Craig\n41. Flotte\n42. Fran Seekr\n43. Grace Lynn\n44. Grandmaster\n45. Grimcats\n46. Hal Coleman\n47. Hermes\n48. Ilya Platiere (2 outfits)\n49. Ines\n50. Irina Reinford\n51. Jessica Schleiden\n52. Jingo\n53. Jolda\n54. Josette Capua (2 outfits)\n55. Judith Ranster (2 outfits)\n56. Julia Schwar\n57. Juna Crawford (3 outfits)\n58. Kaela MacMillan\n59. Kate (2 outfits)\n60. KeA (2 outfits)\n61. Kilika Rouran\n62. Kisara (2 outfits)\n63. Klaudia Von Auslese (3 outfits)\n64. Lapis Rosenberg\n65. Laura S Arseid\n66. Lavian Winslet\n67. Leonora\n68. Lila (2 outfits)\n69. Lina Crawford\n70. Linde\n71. Louise (2 outfits)\n72. Lucrezia Isselee\n73. Lucy Seiland\n74. Lynn\n75. Mare\n76. Mariabell Crois (2 outfits)\n77. Marielle Ayme\n78. Maya\n79. Millium Orion\n80. Mint\n81. Mirabel Aalton\n82. Mireille\n83. Misty\n84. Musse Egret (3 outfits)\n85. Nadia Rayne (2 outfits)\n86. Naje Berca\n87. Nina Fenly\n88. Noel Seeker\n89. Odette\n90. Olympia\n91. Patiry\n92. Paulette\n93. Priscilla Reise Arnor\n94. Rebecca\n95. Regina\n96. Renne Bright (4 outfits)\n97. Ries Argent\n98. Risette Twinings (3 outfits)\n99. 
Rixia Mao (3 outfits)\n100. Rosalie\n101. Roselia Millstein (2 versions)\n102. Rosine\n103. Saara (2 outfits)\n104. Sandy\n105. Sara Valestein\n106. Scherazard Harvey\n107. Shahina (2 outfits)\n108. Shanshan\n109. Sharon Kreuger (2 outfits)\n110. Shino\n111. Shirley Orlando (3 outfits)\n112. Shizuku MacLaine (2 outfits)\n113. Shizuna Rem Misurugi (2 outfits)\n114. Sonya Baelz\n115. Sophia Hayworth\n116. Sully Atraid (2 outfits)\n117. Tatiana\n118. Theresia Caroline\n119. Tio Plato (2 outfits)\n120. Tita Russell (2 outfits)\n121. Towa Herschel (3 outfits)\n122. Valerie (2 outfits)\n123. Viola\n124. Vita Clotilde (2 outfits)\n125. Vivi\n126. Wendy\n127. Zita Asverl"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2276
- Accuracy: 0.9424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
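Cards like this one are produced by the `transformers` `Trainer`, and the hyperparameters listed above map directly onto a `TrainingArguments` object. The sketch below shows that mapping together with an accuracy `compute_metrics` callback matching the card's metric; dataset loading and tokenization are omitted, and the two-label setup is an assumption.

```python
import numpy as np


def compute_metrics(eval_pred):
    """Accuracy over argmax predictions, matching the card's `accuracy` column."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}


def build_trainer():
    """Sketch of the fine-tuning setup with the hyperparameters listed above.
    Requires `transformers`; train/eval datasets are omitted here."""
    from transformers import (AutoModelForSequenceClassification, Trainer,
                              TrainingArguments)

    args = TrainingArguments(
        output_dir="my_awesome_model",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=16,
        num_train_epochs=2,
        lr_scheduler_type="linear",
        seed=42,
    )
    model = AutoModelForSequenceClassification.from_pretrained(
        "albert-base-v2", num_labels=2  # label count is an assumption
    )
    return Trainer(model=model, args=args, compute_metrics=compute_metrics)
```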
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2073 | 1.0 | 1563 | 0.1896 | 0.9298 |
| 0.1448 | 2.0 | 3126 | 0.2276 | 0.9424 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "albert-base-v2", "model-index": [{"name": "my_awesome_model", "results": []}]} | BohanJiang/my_awesome_model | null | [
"transformers",
"tensorboard",
"safetensors",
"albert",
"text-classification",
"generated_from_trainer",
"base_model:albert-base-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:19:30+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #base_model-albert-base-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
| my\_awesome\_model
==================
This model is a fine-tuned version of albert-base-v2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2276
* Accuracy: 0.9424
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #albert #text-classification #generated_from_trainer #base_model-albert-base-v2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K14ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K14ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4754
- F1 Score: 0.7754
- Accuracy: 0.7746
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
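With `lr_scheduler_type: linear` and a fixed step budget, the learning rate decays from 5e-4 at step 0 to zero at step 10,000. The helper below sketches that schedule; zero warmup steps is an assumption, since the card does not report any.

```python
def linear_lr(step: int, base_lr: float = 5e-4, total_steps: int = 10_000) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps
    (assumes zero warmup steps, which the card does not state)."""
    if step >= total_steps:
        return 0.0
    return base_lr * (1 - step / total_steps)
```

This matches the shape of `get_linear_schedule_with_warmup` in `transformers` when the warmup count is zero.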
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5415 | 0.97 | 200 | 0.4991 | 0.7599 | 0.7582 |
| 0.4981 | 1.93 | 400 | 0.4800 | 0.7757 | 0.7743 |
| 0.4837 | 2.9 | 600 | 0.4962 | 0.7656 | 0.7643 |
| 0.4772 | 3.86 | 800 | 0.4695 | 0.7792 | 0.7779 |
| 0.471 | 4.83 | 1000 | 0.5198 | 0.7598 | 0.7589 |
| 0.4605 | 5.8 | 1200 | 0.4931 | 0.7705 | 0.7691 |
| 0.4537 | 6.76 | 1400 | 0.4735 | 0.7818 | 0.7803 |
| 0.4446 | 7.73 | 1600 | 0.4716 | 0.7838 | 0.7825 |
| 0.4392 | 8.7 | 1800 | 0.4845 | 0.7800 | 0.7785 |
| 0.4285 | 9.66 | 2000 | 0.4860 | 0.7704 | 0.7688 |
| 0.427 | 10.63 | 2200 | 0.5009 | 0.7794 | 0.7779 |
| 0.4138 | 11.59 | 2400 | 0.4853 | 0.7758 | 0.7746 |
| 0.409 | 12.56 | 2600 | 0.4986 | 0.7805 | 0.7794 |
| 0.3984 | 13.53 | 2800 | 0.5008 | 0.7647 | 0.7631 |
| 0.3934 | 14.49 | 3000 | 0.5097 | 0.7713 | 0.7697 |
| 0.377 | 15.46 | 3200 | 0.5298 | 0.7762 | 0.7749 |
| 0.3789 | 16.43 | 3400 | 0.5258 | 0.7698 | 0.7682 |
| 0.3651 | 17.39 | 3600 | 0.5315 | 0.7672 | 0.7658 |
| 0.356 | 18.36 | 3800 | 0.5486 | 0.7702 | 0.7688 |
| 0.3535 | 19.32 | 4000 | 0.5380 | 0.7740 | 0.7728 |
| 0.3368 | 20.29 | 4200 | 0.5776 | 0.7764 | 0.7758 |
| 0.3397 | 21.26 | 4400 | 0.5543 | 0.7727 | 0.7713 |
| 0.3299 | 22.22 | 4600 | 0.5806 | 0.7677 | 0.7661 |
| 0.3246 | 23.19 | 4800 | 0.5656 | 0.7772 | 0.7758 |
| 0.3155 | 24.15 | 5000 | 0.6116 | 0.7749 | 0.7734 |
| 0.3081 | 25.12 | 5200 | 0.5955 | 0.7653 | 0.7637 |
| 0.3004 | 26.09 | 5400 | 0.5893 | 0.7790 | 0.7776 |
| 0.3003 | 27.05 | 5600 | 0.6006 | 0.7740 | 0.7725 |
| 0.2921 | 28.02 | 5800 | 0.6405 | 0.7692 | 0.7676 |
| 0.2845 | 28.99 | 6000 | 0.6178 | 0.7682 | 0.7667 |
| 0.2802 | 29.95 | 6200 | 0.6065 | 0.7690 | 0.7676 |
| 0.2781 | 30.92 | 6400 | 0.5852 | 0.7805 | 0.7797 |
| 0.2693 | 31.88 | 6600 | 0.6314 | 0.7724 | 0.7710 |
| 0.2647 | 32.85 | 6800 | 0.6444 | 0.7695 | 0.7679 |
| 0.2607 | 33.82 | 7000 | 0.6346 | 0.7745 | 0.7731 |
| 0.2542 | 34.78 | 7200 | 0.6513 | 0.7682 | 0.7667 |
| 0.257 | 35.75 | 7400 | 0.6532 | 0.7611 | 0.7595 |
| 0.2466 | 36.71 | 7600 | 0.6450 | 0.7733 | 0.7725 |
| 0.2456 | 37.68 | 7800 | 0.6273 | 0.7704 | 0.7691 |
| 0.2411 | 38.65 | 8000 | 0.6753 | 0.7705 | 0.7691 |
| 0.2438 | 39.61 | 8200 | 0.6777 | 0.7700 | 0.7688 |
| 0.2326 | 40.58 | 8400 | 0.6991 | 0.7704 | 0.7688 |
| 0.2391 | 41.55 | 8600 | 0.6810 | 0.7670 | 0.7655 |
| 0.2335 | 42.51 | 8800 | 0.6759 | 0.7719 | 0.7707 |
| 0.231 | 43.48 | 9000 | 0.6950 | 0.7715 | 0.7700 |
| 0.2292 | 44.44 | 9200 | 0.6988 | 0.7682 | 0.7667 |
| 0.2291 | 45.41 | 9400 | 0.6996 | 0.7682 | 0.7667 |
| 0.2188 | 46.38 | 9600 | 0.7126 | 0.7703 | 0.7688 |
| 0.2218 | 47.34 | 9800 | 0.7034 | 0.7696 | 0.7682 |
| 0.2218 | 48.31 | 10000 | 0.7038 | 0.7705 | 0.7691 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K14ac-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:20:37+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_EMP\_H3K14ac-seqsight\_4096\_512\_46M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K14ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4754
* F1 Score: 0.7754
* Accuracy: 0.7746
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5833
- F1 Score: 0.6924
- Accuracy: 0.6966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
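The F1/accuracy columns below come from the GUE evaluation split. As a sketch, a plain binary F1 looks like the first function here (the averaging behind the card's "F1 Score" is not stated, so this is only illustrative), and the trained adapter can in principle be attached to its base model with `peft` as in the second function (repo ids copied from the card; extra loading arguments such as `trust_remote_code` may be required).

```python
def f1_binary(y_true, y_pred):
    """Plain binary F1 for illustration; the exact averaging used for the
    card's 'F1 Score' column is not specified."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)


def load_adapter():
    """Sketch of attaching this PEFT adapter to the base model.
    Requires `peft` and `transformers`; not executed here."""
    from peft import PeftModel
    from transformers import AutoModel

    base = AutoModel.from_pretrained("mahdibaghbanzadeh/seqsight_4096_512_46M")
    return PeftModel.from_pretrained(
        base, "mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_46M-L1_f"
    )
```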
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6394 | 1.04 | 200 | 0.6318 | 0.5864 | 0.6422 |
| 0.6149 | 2.08 | 400 | 0.6151 | 0.6606 | 0.6631 |
| 0.6069 | 3.12 | 600 | 0.6077 | 0.6453 | 0.6693 |
| 0.6023 | 4.17 | 800 | 0.6002 | 0.6715 | 0.6748 |
| 0.5966 | 5.21 | 1000 | 0.6032 | 0.6610 | 0.6725 |
| 0.5931 | 6.25 | 1200 | 0.5932 | 0.6712 | 0.6833 |
| 0.5879 | 7.29 | 1400 | 0.5943 | 0.6770 | 0.6781 |
| 0.5854 | 8.33 | 1600 | 0.5951 | 0.6791 | 0.6813 |
| 0.5836 | 9.38 | 1800 | 0.5983 | 0.6824 | 0.6810 |
| 0.5814 | 10.42 | 2000 | 0.5874 | 0.6765 | 0.6846 |
| 0.582 | 11.46 | 2200 | 0.5864 | 0.6747 | 0.6882 |
| 0.572 | 12.5 | 2400 | 0.5876 | 0.6792 | 0.6862 |
| 0.5725 | 13.54 | 2600 | 0.5870 | 0.6763 | 0.6859 |
| 0.5749 | 14.58 | 2800 | 0.5877 | 0.6759 | 0.6872 |
| 0.5739 | 15.62 | 3000 | 0.5879 | 0.6828 | 0.6878 |
| 0.5675 | 16.67 | 3200 | 0.5866 | 0.6881 | 0.6914 |
| 0.5688 | 17.71 | 3400 | 0.5848 | 0.6846 | 0.6905 |
| 0.5664 | 18.75 | 3600 | 0.5884 | 0.6729 | 0.6852 |
| 0.5685 | 19.79 | 3800 | 0.5850 | 0.6848 | 0.6901 |
| 0.5622 | 20.83 | 4000 | 0.5844 | 0.6847 | 0.6882 |
| 0.5624 | 21.88 | 4200 | 0.5881 | 0.6838 | 0.6849 |
| 0.5596 | 22.92 | 4400 | 0.5862 | 0.6853 | 0.6891 |
| 0.5617 | 23.96 | 4600 | 0.5843 | 0.6885 | 0.6898 |
| 0.5599 | 25.0 | 4800 | 0.5830 | 0.6886 | 0.6940 |
| 0.5584 | 26.04 | 5000 | 0.5874 | 0.6839 | 0.6859 |
| 0.5565 | 27.08 | 5200 | 0.5853 | 0.6832 | 0.6843 |
| 0.5557 | 28.12 | 5400 | 0.5837 | 0.6852 | 0.6898 |
| 0.554 | 29.17 | 5600 | 0.5870 | 0.6819 | 0.6852 |
| 0.5543 | 30.21 | 5800 | 0.5883 | 0.6856 | 0.6885 |
| 0.5509 | 31.25 | 6000 | 0.5886 | 0.6846 | 0.6869 |
| 0.5536 | 32.29 | 6200 | 0.5829 | 0.6862 | 0.6918 |
| 0.5529 | 33.33 | 6400 | 0.5874 | 0.6859 | 0.6882 |
| 0.5495 | 34.38 | 6600 | 0.5847 | 0.6912 | 0.6957 |
| 0.5501 | 35.42 | 6800 | 0.5840 | 0.6881 | 0.6927 |
| 0.5507 | 36.46 | 7000 | 0.5848 | 0.6868 | 0.6924 |
| 0.5482 | 37.5 | 7200 | 0.5848 | 0.6889 | 0.6924 |
| 0.5465 | 38.54 | 7400 | 0.5842 | 0.6874 | 0.6918 |
| 0.5518 | 39.58 | 7600 | 0.5826 | 0.6908 | 0.6960 |
| 0.5495 | 40.62 | 7800 | 0.5877 | 0.6859 | 0.6862 |
| 0.5457 | 41.67 | 8000 | 0.5833 | 0.6893 | 0.6921 |
| 0.544 | 42.71 | 8200 | 0.5836 | 0.6927 | 0.6976 |
| 0.548 | 43.75 | 8400 | 0.5827 | 0.6925 | 0.6960 |
| 0.5431 | 44.79 | 8600 | 0.5845 | 0.6912 | 0.6931 |
| 0.542 | 45.83 | 8800 | 0.5829 | 0.6928 | 0.6979 |
| 0.5444 | 46.88 | 9000 | 0.5848 | 0.6899 | 0.6918 |
| 0.5439 | 47.92 | 9200 | 0.5845 | 0.6898 | 0.6921 |
| 0.5454 | 48.96 | 9400 | 0.5835 | 0.6933 | 0.6970 |
| 0.5424 | 50.0 | 9600 | 0.5845 | 0.6906 | 0.6931 |
| 0.5423 | 51.04 | 9800 | 0.5845 | 0.6899 | 0.6927 |
| 0.5406 | 52.08 | 10000 | 0.5845 | 0.6905 | 0.6934 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:20:38+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_4096\_512\_46M-L1\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5833
* F1 Score: 0.6924
* Accuracy: 0.6966
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amtibot0
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7046
- Rouge1: 0.3003
- Rouge2: 0.1181
- Rougel: 0.2653
- Rougelsum: 0.2653
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
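The Rouge1/Rouge2/RougeL columns below are unigram-, bigram-, and longest-common-subsequence-based overlap scores. As a toy illustration of what ROUGE-1 measures, the function here computes unigram-overlap F1; the real `rouge` metric adds tokenization and stemming rules, so its numbers will differ from this sketch.

```python
from collections import Counter


def rouge1_f(reference: str, candidate: str) -> float:
    """Toy ROUGE-1 F1: unigram overlap between reference and candidate.
    The real `rouge` metric applies stemming/normalization not modeled here."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```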
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 10 | 4.1094 | 0.3037 | 0.1191 | 0.2643 | 0.2644 | 19.0 |
| No log | 2.0 | 20 | 3.8791 | 0.3019 | 0.1174 | 0.2645 | 0.2644 | 19.0 |
| No log | 3.0 | 30 | 3.7484 | 0.3031 | 0.1202 | 0.2678 | 0.268 | 19.0 |
| No log | 4.0 | 40 | 3.7046 | 0.3003 | 0.1181 | 0.2653 | 0.2653 | 19.0 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-small", "model-index": [{"name": "amtibot0", "results": []}]} | josiahgottfried/amtibot0 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:23:52+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| amtibot0
========
This model is a fine-tuned version of t5-small on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.7046
* Rouge1: 0.3003
* Rouge2: 0.1181
* Rougel: 0.2653
* Rougelsum: 0.2653
* Gen Len: 19.0
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | HenryCai1129/adapter-llama-adapterhappy2sad-1k-50-0.006 | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:27:58+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | gubartz/best_p | null | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:29:17+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #longt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #longt5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
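The `total_train_batch_size` above is derived rather than set directly: it is the per-device batch size times the gradient accumulation steps. A sketch of the arithmetic (assuming single-GPU training, since the card lists no device count):

```python
# Effective (total) train batch size = per-device batch * accumulation steps * n_devices.
train_batch_size = 1             # per-device micro-batch, from the card
gradient_accumulation_steps = 16  # from the card
num_devices = 1                   # assumption: no device count is listed

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 16, matching the value reported above
```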
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1 | {"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora", "results": []}]} | NassimB/mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora | null | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-04-26T20:30:04+00:00 | [] | [] | TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1 | [
"# mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
] | [
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# mistral-7b-hf-platypus-lamini-vxxiii-chat-real_baselora\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
] |
text-generation | transformers |
# Keiana-L3-Test5.76-8B-13.6
Keiana-L3-Test5.76-8B-13.6 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
# Keep in mind that this merged model is largely untested at the moment, which may result in vocabulary errors.
* [Kaoeiri/Keiana-L3-Test5.2-8B-8](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.2-8B-8)
* [Undi95/Llama-3-LewdPlay-8B](https://huggingface.co/Undi95/Llama-3-LewdPlay-8B)
* [Kaoeiri/Keiana-L3-Test4.7-8B-3](https://huggingface.co/Kaoeiri/Keiana-L3-Test4.7-8B-3)
## 🧩 Configuration
```yaml
merge_method: model_stock
dtype: float16
base_model: Kaoeiri/Keiana-L3-Test5.6-8B-12
models:
- model: Kaoeiri/Keiana-L3-Test5.2-8B-8
parameters:
weight: .4
density: .4
- model: Undi95/Llama-3-LewdPlay-8B
parameters:
weight: .2
density: .4
- model: Kaoeiri/Keiana-L3-Test4.7-8B-3
parameters:
weight: .1
density: .2
parameters:
int8_mask: true
```
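For intuition only: the `weight` fields above behave like coefficients on each model's delta from the base. mergekit's `model_stock` method is more involved (it derives interpolation ratios geometrically rather than taking the weights at face value), so the toy sketch below illustrates the weighted-sum idea only, not the real implementation:

```python
# Toy illustration (NOT mergekit's actual model_stock algorithm):
# combine per-model deltas from a shared base, scaled by config weights.
def weighted_merge(base, models_and_weights):
    """base: list of floats; models_and_weights: list of (params, weight) pairs."""
    merged = list(base)
    for params, weight in models_and_weights:
        for i, p in enumerate(params):
            merged[i] += weight * (p - base[i])  # add a scaled delta from base
    return merged

base = [1.0, 2.0]
m1 = [2.0, 2.0]  # stands in for one fine-tuned model
m2 = [1.0, 4.0]  # stands in for another
out = weighted_merge(base, [(m1, 0.4), (m2, 0.2)])
print(out)  # roughly [1.4, 2.4]
```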
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kaoeiri/Keiana-L3-Test5.76-8B-13.6"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.2-8B-8", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3"], "base_model": ["Kaoeiri/Keiana-L3-Test5.2-8B-8", "Undi95/Llama-3-LewdPlay-8B", "Kaoeiri/Keiana-L3-Test4.7-8B-3"]} | Kaoeiri/Keiana-L3-Test5.76-8B-13.6 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kaoeiri/Keiana-L3-Test5.2-8B-8",
"Undi95/Llama-3-LewdPlay-8B",
"Kaoeiri/Keiana-L3-Test4.7-8B-3",
"conversational",
"base_model:Kaoeiri/Keiana-L3-Test5.2-8B-8",
"base_model:Undi95/Llama-3-LewdPlay-8B",
"base_model:Kaoeiri/Keiana-L3-Test4.7-8B-3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:30:44+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.2-8B-8 #Undi95/Llama-3-LewdPlay-8B #Kaoeiri/Keiana-L3-Test4.7-8B-3 #conversational #base_model-Kaoeiri/Keiana-L3-Test5.2-8B-8 #base_model-Undi95/Llama-3-LewdPlay-8B #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Keiana-L3-Test5.76-8B-13.6
Keiana-L3-Test5.76-8B-13.6 is a merge of the following models using LazyMergekit:
# Keep in mind that this merged model is largely untested at the moment, which may result in vocabulary errors.
* Kaoeiri/Keiana-L3-Test5.2-8B-8
* Undi95/Llama-3-LewdPlay-8B
* Kaoeiri/Keiana-L3-Test4.7-8B-3
## Configuration
## Usage
| [
"# Keiana-L3-Test5.76-8B-13.6\n\nKeiana-L3-Test5.76-8B-13.6 is a merge of the following models using LazyMergekit:",
"# Keep in mind that this merged model is not thoroughly tested at the moment, which could result in vocabulary errors.\n* Kaoeiri/Keiana-L3-Test5.2-8B-8\n* Undi95/Llama-3-LewdPlay-8B\n* Kaoeiri/Keiana-L3-Test4.7-8B-3",
"## Configuration",
"## Usage"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #Kaoeiri/Keiana-L3-Test5.2-8B-8 #Undi95/Llama-3-LewdPlay-8B #Kaoeiri/Keiana-L3-Test4.7-8B-3 #conversational #base_model-Kaoeiri/Keiana-L3-Test5.2-8B-8 #base_model-Undi95/Llama-3-LewdPlay-8B #base_model-Kaoeiri/Keiana-L3-Test4.7-8B-3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Keiana-L3-Test5.76-8B-13.6\n\nKeiana-L3-Test5.76-8B-13.6 is a merge of the following models using LazyMergekit:",
"# Keep in mind that this merged model is not thoroughly tested at the moment, which could result in vocabulary errors.\n* Kaoeiri/Keiana-L3-Test5.2-8B-8\n* Undi95/Llama-3-LewdPlay-8B\n* Kaoeiri/Keiana-L3-Test4.7-8B-3",
"## Configuration",
"## Usage"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | MrezaPRZ/CodeLLama_SFT_GRETEL | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:32:16+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Terjman-Large
This is a translation model from English to Moroccan Darija. It is a fine-tuned version of "Helsinki-NLP/opus-mt-tc-big-en-ar" on the "BounharAbdelaziz/English-to-Moroccan-Darija" dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- warmup_ratio: 0.03
- gradient_accumulation_steps: 2
- total_train_batch_size: 182
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
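As a rough illustration of how the linear scheduler interacts with the warmup ratio above, the multiplier applied to the base learning rate of 3e-05 at each step behaves approximately as follows (a simplified sketch of the usual warmup-then-linear-decay behaviour, not the trainer's actual implementation; the function name is ours):

```python
def lr_multiplier(step, total_steps, warmup_ratio=0.03):
    """Learning-rate factor for a linear schedule with warmup:
    ramps 0 -> 1 over the warmup steps, then decays linearly to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# The base learning rate (3e-05 here) is scaled by this factor at every step.
```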
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2 | {"language": ["ar", "en"], "tags": ["generated_from_trainer"], "datasets": ["BounharAbdelaziz/English-to-Moroccan-Darija"], "metrics": ["bleu"], "model-index": [{"name": "results", "results": []}]} | atlasia/Terjman-Large | null | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"ar",
"en",
"dataset:BounharAbdelaziz/English-to-Moroccan-Darija",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"has_space"
] | null | 2024-04-26T20:33:18+00:00 | [] | [
"ar",
"en"
] | TAGS
#transformers #safetensors #marian #text2text-generation #generated_from_trainer #ar #en #dataset-BounharAbdelaziz/English-to-Moroccan-Darija #autotrain_compatible #endpoints_compatible #region-us #has_space
|
# Terjman-Large
This is a translation model from English to Moroccan Darija. It is a fine-tuned version of "Helsinki-NLP/opus-mt-tc-big-en-ar" on the "BounharAbdelaziz/English-to-Moroccan-Darija" dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- warmup_ratio: 0.03
- gradient_accumulation_steps: 2
- total_train_batch_size: 182
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.2+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2 | [
"# Terjman-Large\n\nThis is a translation model from English to Moroccan darija. It is a finetuned version of \"Helsinki-NLP/opus-mt-tc-big-en-ar\" on the \"BounharAbdelaziz/English-to-Moroccan-Darija\" dataset.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 96\n- eval_batch_size: 96\n- seed: 42\n- warmup_ratio: 0.03\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 182\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 30",
"### Framework versions\n\n- Transformers 4.39.2\n- Pytorch 2.2.2+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] | [
"TAGS\n#transformers #safetensors #marian #text2text-generation #generated_from_trainer #ar #en #dataset-BounharAbdelaziz/English-to-Moroccan-Darija #autotrain_compatible #endpoints_compatible #region-us #has_space \n",
"# Terjman-Large\n\nThis is a translation model from English to Moroccan darija. It is a finetuned version of \"Helsinki-NLP/opus-mt-tc-big-en-ar\" on the \"BounharAbdelaziz/English-to-Moroccan-Darija\" dataset.",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 96\n- eval_batch_size: 96\n- seed: 42\n- warmup_ratio: 0.03\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 182\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 30",
"### Framework versions\n\n- Transformers 4.39.2\n- Pytorch 2.2.2+cpu\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# danish-bert-botxo-fb-housing-posts
This model is a fine-tuned version of [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1659
- Accuracy: 0.9519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.372 | 1.0 | 55 | 0.2514 | 0.9251 |
| 0.171 | 2.0 | 110 | 0.1881 | 0.9305 |
| 0.2315 | 3.0 | 165 | 0.1854 | 0.9465 |
| 0.1284 | 4.0 | 220 | 0.1745 | 0.9465 |
| 0.0353 | 5.0 | 275 | 0.1659 | 0.9519 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "Maltehb/danish-bert-botxo", "model-index": [{"name": "danish-bert-botxo-fb-housing-posts", "results": []}]} | hoaj/danish-bert-botxo-fb-housing-posts | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:Maltehb/danish-bert-botxo",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:36:20+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-Maltehb/danish-bert-botxo #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us
| danish-bert-botxo-fb-housing-posts
==================================
This model is a fine-tuned version of Maltehb/danish-bert-botxo on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1659
* Accuracy: 0.9519
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.40.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-Maltehb/danish-bert-botxo #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_46M-L8_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5959
- F1 Score: 0.6921
- Accuracy: 0.6927
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
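For reference, the Adam optimizer named above (betas=(0.9,0.999), epsilon=1e-08) updates each parameter roughly as follows — a scalar sketch for illustration only, not the trainer's actual code:

```python
def adam_step(p, g, m, v, t, lr=0.0005, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for a single scalar parameter p with gradient g;
    # m and v are the running first/second moment estimates, step t >= 1.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)   # bias correction
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (v_hat ** 0.5 + eps)
    return p, m, v
```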
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6344 | 1.04 | 200 | 0.6225 | 0.6144 | 0.6575 |
| 0.6043 | 2.08 | 400 | 0.6163 | 0.6645 | 0.6631 |
| 0.5927 | 3.12 | 600 | 0.5941 | 0.6726 | 0.6839 |
| 0.5884 | 4.17 | 800 | 0.5944 | 0.6772 | 0.6787 |
| 0.5821 | 5.21 | 1000 | 0.5999 | 0.6648 | 0.6771 |
| 0.5781 | 6.25 | 1200 | 0.5889 | 0.6766 | 0.6830 |
| 0.57 | 7.29 | 1400 | 0.5864 | 0.6840 | 0.6852 |
| 0.5676 | 8.33 | 1600 | 0.5910 | 0.6843 | 0.6865 |
| 0.5642 | 9.38 | 1800 | 0.5859 | 0.6850 | 0.6849 |
| 0.5589 | 10.42 | 2000 | 0.5879 | 0.6746 | 0.6856 |
| 0.5594 | 11.46 | 2200 | 0.5878 | 0.6760 | 0.6888 |
| 0.5474 | 12.5 | 2400 | 0.5921 | 0.6837 | 0.6882 |
| 0.547 | 13.54 | 2600 | 0.5910 | 0.6845 | 0.6875 |
| 0.546 | 14.58 | 2800 | 0.5962 | 0.6778 | 0.6888 |
| 0.5445 | 15.62 | 3000 | 0.5916 | 0.6875 | 0.6908 |
| 0.5372 | 16.67 | 3200 | 0.5967 | 0.6834 | 0.6869 |
| 0.5345 | 17.71 | 3400 | 0.5937 | 0.6843 | 0.6869 |
| 0.5332 | 18.75 | 3600 | 0.5958 | 0.6911 | 0.6973 |
| 0.5322 | 19.79 | 3800 | 0.5955 | 0.6925 | 0.6931 |
| 0.5259 | 20.83 | 4000 | 0.5892 | 0.6922 | 0.6973 |
| 0.5225 | 21.88 | 4200 | 0.6037 | 0.6956 | 0.6973 |
| 0.5192 | 22.92 | 4400 | 0.6015 | 0.6874 | 0.6885 |
| 0.5181 | 23.96 | 4600 | 0.6004 | 0.6907 | 0.6905 |
| 0.5149 | 25.0 | 4800 | 0.6000 | 0.6964 | 0.6966 |
| 0.5119 | 26.04 | 5000 | 0.6099 | 0.6871 | 0.6891 |
| 0.5081 | 27.08 | 5200 | 0.6031 | 0.6916 | 0.6924 |
| 0.5067 | 28.12 | 5400 | 0.6027 | 0.6900 | 0.6937 |
| 0.5007 | 29.17 | 5600 | 0.6006 | 0.6901 | 0.6937 |
| 0.4993 | 30.21 | 5800 | 0.6142 | 0.6915 | 0.6934 |
| 0.4967 | 31.25 | 6000 | 0.6157 | 0.6846 | 0.6856 |
| 0.4958 | 32.29 | 6200 | 0.6117 | 0.6835 | 0.6872 |
| 0.4951 | 33.33 | 6400 | 0.6141 | 0.6894 | 0.6891 |
| 0.489 | 34.38 | 6600 | 0.6234 | 0.6854 | 0.6905 |
| 0.4901 | 35.42 | 6800 | 0.6234 | 0.6812 | 0.6833 |
| 0.4895 | 36.46 | 7000 | 0.6140 | 0.6878 | 0.6888 |
| 0.4838 | 37.5 | 7200 | 0.6209 | 0.6917 | 0.6927 |
| 0.4808 | 38.54 | 7400 | 0.6218 | 0.6866 | 0.6895 |
| 0.4883 | 39.58 | 7600 | 0.6064 | 0.6914 | 0.6970 |
| 0.4815 | 40.62 | 7800 | 0.6235 | 0.6883 | 0.6878 |
| 0.4764 | 41.67 | 8000 | 0.6188 | 0.6831 | 0.6849 |
| 0.473 | 42.71 | 8200 | 0.6329 | 0.6819 | 0.6833 |
| 0.4764 | 43.75 | 8400 | 0.6181 | 0.6878 | 0.6901 |
| 0.473 | 44.79 | 8600 | 0.6212 | 0.6869 | 0.6885 |
| 0.4728 | 45.83 | 8800 | 0.6212 | 0.6860 | 0.6882 |
| 0.4713 | 46.88 | 9000 | 0.6265 | 0.6852 | 0.6856 |
| 0.472 | 47.92 | 9200 | 0.6250 | 0.6842 | 0.6852 |
| 0.4705 | 48.96 | 9400 | 0.6273 | 0.6846 | 0.6875 |
| 0.4695 | 50.0 | 9600 | 0.6263 | 0.6861 | 0.6865 |
| 0.4686 | 51.04 | 9800 | 0.6281 | 0.6839 | 0.6849 |
| 0.4644 | 52.08 | 10000 | 0.6287 | 0.6842 | 0.6849 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_46M-L8_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_46M-L8_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:40:14+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_4096\_512\_46M-L8\_f
================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5959
* F1 Score: 0.6921
* Accuracy: 0.6927
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text-generation | transformers |
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Concretely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for, however, which is generating texts from a
prompt.
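The shifted targets and the causal mask described above can be sketched in plain Python — an illustrative toy, not the model's actual implementation:

```python
def clm_example(token_ids):
    # At each position the model sees tokens up to i and must predict the
    # next token, so the targets are simply the inputs shifted by one.
    inputs, targets = token_ids[:-1], token_ids[1:]
    # Causal mask: position i may attend to positions j <= i only.
    n = len(inputs)
    mask = [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
    return inputs, targets, mask
```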
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
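To make the tokenization step concrete, here is a toy sketch of the pair-merge operation at the core of BPE. This is illustration only — the real GPT-2 tokenizer applies roughly 50,000 learned merges over raw bytes, not this miniature character-level example:

```python
from collections import Counter

def most_frequent_pair(symbols):
    # Count every adjacent pair and return the most frequent one.
    pairs = Counter(zip(symbols, symbols[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(symbols, pair):
    # Replace every occurrence of the chosen pair with a single merged symbol.
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

seq = list("low lower lowest")
pair = most_frequent_pair(seq)   # the most frequent adjacent pair
seq = merge_pair(seq, pair)      # one BPE merge step
print(pair, seq)
```

Training repeats this merge step until the vocabulary budget (here, 50,257 symbols) is reached.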
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
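As a reminder of what the PPL entries measure: perplexity is the exponentiated mean negative log-likelihood per token. A sketch with invented numbers (the table's real scores come from scoring each benchmark's text with the model):

```python
import math

# Perplexity = exp(mean negative log-likelihood per token).
# These NLL values are made up purely for illustration.
token_nlls = [2.1, 3.4, 0.7, 1.9, 2.8]  # -log p(token | context), in nats
ppl = math.exp(sum(token_nlls) / len(token_nlls))
print(ppl)
```

Lower perplexity means the model assigns higher probability to the held-out text.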
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| {"language": "en", "license": "mit", "tags": ["exbert"]} | ChatTechAI/Babbage-V1 | null | [
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"onnx",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:40:45+00:00 | [] | [
"en"
] | TAGS
#transformers #pytorch #tf #jax #tflite #rust #onnx #safetensors #gpt2 #text-generation #exbert #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| GPT-2
=====
Test the whole generation capabilities here: URL
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
this paper
and first released at this page.
Disclaimer: The team releasing GPT-2 also wrote a
model card for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
Model description
-----------------
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' use only the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the smallest version of GPT-2, with 124M parameters.
Related Models: GPT-Large, GPT-Medium and GPT-XL
Intended uses & limitations
---------------------------
You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their
model card:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Here's an example of how the model can have biased predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
Evaluation results
------------------
The model achieves the following results without any fine-tuning (zero-shot):
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
| [
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] | [
"TAGS\n#transformers #pytorch #tf #jax #tflite #rust #onnx #safetensors #gpt2 #text-generation #exbert #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
null | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 120
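The optimizer line above fully determines the update rule. As a sketch, here is one Adam step with these exact hyperparameters (a pure-Python illustration, not the Trainer's implementation):

```python
def adam_step(param, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update with the hyperparameters listed above.
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)              # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
print(p)
```

The first bias-corrected step moves the parameter by roughly the learning rate times the gradient's sign.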
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "output", "results": []}]} | soft-boy/decision_transformer | null | [
"transformers",
"tensorboard",
"safetensors",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:41:20+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #decision_transformer #generated_from_trainer #endpoints_compatible #region-us
|
# output
This model is a fine-tuned version of [](URL on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 120
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| [
"# output\n\nThis model is a fine-tuned version of [](URL on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 120",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #decision_transformer #generated_from_trainer #endpoints_compatible #region-us \n",
"# output\n\nThis model is a fine-tuned version of [](URL on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 64\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 120",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/maywell/miqu-evil-dpo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/miqu-evil-dpo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
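Concatenation here is plain byte-level joining of the parts in order. A minimal sketch with stand-in files (substitute the real `.part1of2`/`.part2of2` downloads, e.g. the Q6_K parts listed in the table below):

```shell
# Create tiny stand-in part files; in practice these are the downloaded
# .part1of2 / .part2of2 pieces of the quant.
printf 'first-half-' > model.gguf.part1of2
printf 'second-half' > model.gguf.part2of2

# Join the parts in order into a single contiguous GGUF file.
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf

# The split pieces can be removed afterwards.
rm model.gguf.part1of2 model.gguf.part2of2
cat model.gguf
```

llama.cpp and most GGUF loaders expect the single recombined file, not the parts.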
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ1_S.gguf) | i1-IQ1_S | 14.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ1_M.gguf) | i1-IQ1_M | 16.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.4 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.4 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_S.gguf) | i1-IQ2_S | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ2_M.gguf) | i1-IQ2_M | 23.3 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q2_K.gguf) | i1-Q2_K | 25.6 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 26.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.4 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_S.gguf) | i1-IQ3_S | 30.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ3_M.gguf) | i1-IQ3_M | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 36.9 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q4_0.gguf) | i1-Q4_0 | 39.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.6 | |
| [GGUF](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 48.9 | |
| [PART 1](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miqu-evil-dpo-i1-GGUF/resolve/main/miqu-evil-dpo.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 56.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["not-for-all-audiences"], "base_model": "maywell/miqu-evil-dpo", "license_link": "LICENSE", "license_name": "miqu-license", "quantized_by": "mradermacher"} | mradermacher/miqu-evil-dpo-i1-GGUF | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:maywell/miqu-evil-dpo",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:42:21+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #not-for-all-audiences #en #base_model-maywell/miqu-evil-dpo #license-other #endpoints_compatible #region-us
| About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #not-for-all-audiences #en #base_model-maywell/miqu-evil-dpo #license-other #endpoints_compatible #region-us \n"
] |
image-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold2
This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224](https://huggingface.co/microsoft/swin-large-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1478
- Accuracy: 0.6735
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2127 | 1.0 | 923 | 1.1541 | 0.6049 |
| 0.9204 | 2.0 | 1846 | 1.0192 | 0.6551 |
| 0.7902 | 3.0 | 2769 | 0.9565 | 0.6649 |
| 0.7847 | 4.0 | 3692 | 0.9757 | 0.6743 |
| 0.6816 | 5.0 | 4615 | 0.9792 | 0.6692 |
| 0.4595 | 6.0 | 5538 | 1.0390 | 0.6678 |
| 0.425 | 7.0 | 6461 | 1.0878 | 0.6678 |
| 0.4186 | 8.0 | 7384 | 1.1180 | 0.6727 |
| 0.3048 | 9.0 | 8307 | 1.1393 | 0.6708 |
| 0.2135 | 10.0 | 9230 | 1.1478 | 0.6735 |
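The warmup ratio can be cross-checked against the step counts in the table above: one epoch is 923 optimizer steps, so a 0.1 ratio over 10 epochs corresponds to roughly 923 warmup steps. A quick sketch:

```python
# Derive the linear-warmup span from the schedule settings and the
# per-epoch step count visible in the results table above.
steps_per_epoch = 923
num_epochs = 10
warmup_ratio = 0.1

total_steps = steps_per_epoch * num_epochs
warmup_steps = int(warmup_ratio * total_steps)
print(total_steps, warmup_steps)
```

So the learning rate ramps up over roughly the first epoch, then decays linearly to zero.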
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-large-patch4-window7-224", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold2", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6735135135135135, "name": "Accuracy"}]}]}]} | onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224_fold2 | null | [
"transformers",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-large-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:42:35+00:00 | [] | [] | TAGS
#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
| Boya1\_RMSProp\_1-e5\_10Epoch\_swin-large-patch4-window7-224\_fold2
===================================================================
This model is a fine-tuned version of microsoft/swin-large-patch4-window7-224 on the imagefolder dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1478
* Accuracy: 0.6735
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.35.0
* Pytorch 2.1.0
* Datasets 2.14.6
* Tokenizers 0.14.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] | [
"TAGS\n#transformers #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-large-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.0\n* Datasets 2.14.6\n* Tokenizers 0.14.1"
] |
text-generation | transformers | # merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
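For intuition, SLERP interpolates along the great-circle arc between two weight vectors rather than the straight chord, which keeps the interpolant's norm sensible. A toy per-vector sketch (mergekit applies this per tensor, with the per-layer `t` schedule shown in the configuration below):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    # Spherical linear interpolation between two vectors:
    # follow the arc between v0 and v1 instead of the straight line.
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_omega = max(-1.0, min(1.0, dot / (norm0 * norm1)))
    omega = math.acos(cos_omega)
    if omega < eps:  # nearly parallel vectors: fall back to plain lerp
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # midpoint on the unit circle
```

At `t=0` this returns the first model's weights, at `t=1` the second's.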
### Models Merged
The following models were included in the merge:
* [aaditya/OpenBioLLM-Llama3-8B](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B)
* [mattshumer/Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: aaditya/OpenBioLLM-Llama3-8B
layer_range: [0, 32]
- model: mattshumer/Llama-3-8B-16K
layer_range: [0, 32]
merge_method: slerp
base_model: aaditya/OpenBioLLM-Llama3-8B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
| {"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["aaditya/OpenBioLLM-Llama3-8B", "mattshumer/Llama-3-8B-16K"]} | o2satz/L3_med16 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:aaditya/OpenBioLLM-Llama3-8B",
"base_model:mattshumer/Llama-3-8B-16K",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:43:19+00:00 | [] | [] | TAGS
#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-aaditya/OpenBioLLM-Llama3-8B #base_model-mattshumer/Llama-3-8B-16K #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| # merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* aaditya/OpenBioLLM-Llama3-8B
* mattshumer/Llama-3-8B-16K
### Configuration
The following YAML configuration was used to produce this model:
| [
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* aaditya/OpenBioLLM-Llama3-8B\n* mattshumer/Llama-3-8B-16K",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-aaditya/OpenBioLLM-Llama3-8B #base_model-mattshumer/Llama-3-8B-16K #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* aaditya/OpenBioLLM-Llama3-8B\n* mattshumer/Llama-3-8B-16K",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | woransa/OrpoLlama-3-8B | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:43:52+00:00 | [
"1910.09700"
] | [] | TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
| [
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K4me2-seqsight_4096_512_46M-L32_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K4me2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K4me2) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5838
- F1 Score: 0.6957
- Accuracy: 0.6970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6296 | 1.04 | 200 | 0.6099 | 0.6444 | 0.6719 |
| 0.5961 | 2.08 | 400 | 0.6079 | 0.6722 | 0.6712 |
| 0.5838 | 3.12 | 600 | 0.5955 | 0.6792 | 0.6852 |
| 0.5789 | 4.17 | 800 | 0.5935 | 0.6831 | 0.6849 |
| 0.5677 | 5.21 | 1000 | 0.5989 | 0.6704 | 0.6794 |
| 0.5599 | 6.25 | 1200 | 0.5914 | 0.6868 | 0.6885 |
| 0.5498 | 7.29 | 1400 | 0.5906 | 0.6822 | 0.6852 |
| 0.5397 | 8.33 | 1600 | 0.5999 | 0.6849 | 0.6895 |
| 0.5339 | 9.38 | 1800 | 0.6028 | 0.6835 | 0.6826 |
| 0.5233 | 10.42 | 2000 | 0.6158 | 0.6733 | 0.6839 |
| 0.5159 | 11.46 | 2200 | 0.6140 | 0.6795 | 0.6888 |
| 0.4976 | 12.5 | 2400 | 0.6165 | 0.6813 | 0.6856 |
| 0.4901 | 13.54 | 2600 | 0.6297 | 0.6852 | 0.6869 |
| 0.483 | 14.58 | 2800 | 0.6441 | 0.6727 | 0.6794 |
| 0.474 | 15.62 | 3000 | 0.6274 | 0.6781 | 0.6823 |
| 0.4618 | 16.67 | 3200 | 0.6527 | 0.6793 | 0.6807 |
| 0.4507 | 17.71 | 3400 | 0.6513 | 0.6815 | 0.6826 |
| 0.4411 | 18.75 | 3600 | 0.6755 | 0.6822 | 0.6849 |
| 0.4325 | 19.79 | 3800 | 0.6762 | 0.6808 | 0.6817 |
| 0.4178 | 20.83 | 4000 | 0.6663 | 0.6830 | 0.6885 |
| 0.4111 | 21.88 | 4200 | 0.7240 | 0.6796 | 0.6810 |
| 0.3942 | 22.92 | 4400 | 0.6946 | 0.6762 | 0.6768 |
| 0.3911 | 23.96 | 4600 | 0.7232 | 0.6850 | 0.6839 |
| 0.3827 | 25.0 | 4800 | 0.7319 | 0.6730 | 0.6722 |
| 0.3718 | 26.04 | 5000 | 0.7550 | 0.6841 | 0.6869 |
| 0.3595 | 27.08 | 5200 | 0.7298 | 0.6854 | 0.6846 |
| 0.3549 | 28.12 | 5400 | 0.7805 | 0.6755 | 0.6787 |
| 0.3477 | 29.17 | 5600 | 0.7857 | 0.6724 | 0.6761 |
| 0.3431 | 30.21 | 5800 | 0.7814 | 0.6853 | 0.6862 |
| 0.3351 | 31.25 | 6000 | 0.8192 | 0.6722 | 0.6751 |
| 0.3288 | 32.29 | 6200 | 0.8572 | 0.6694 | 0.6729 |
| 0.3269 | 33.33 | 6400 | 0.8028 | 0.6777 | 0.6768 |
| 0.3173 | 34.38 | 6600 | 0.8300 | 0.6711 | 0.6732 |
| 0.3119 | 35.42 | 6800 | 0.8257 | 0.6722 | 0.6732 |
| 0.3029 | 36.46 | 7000 | 0.8348 | 0.6721 | 0.6725 |
| 0.3015 | 37.5 | 7200 | 0.8823 | 0.6728 | 0.6738 |
| 0.2983 | 38.54 | 7400 | 0.8730 | 0.6689 | 0.6716 |
| 0.2983 | 39.58 | 7600 | 0.8422 | 0.6660 | 0.6706 |
| 0.2871 | 40.62 | 7800 | 0.8960 | 0.6723 | 0.6732 |
| 0.2812 | 41.67 | 8000 | 0.9115 | 0.6643 | 0.6641 |
| 0.2769 | 42.71 | 8200 | 0.9336 | 0.6722 | 0.6745 |
| 0.2743 | 43.75 | 8400 | 0.9085 | 0.6730 | 0.6738 |
| 0.2707 | 44.79 | 8600 | 0.9092 | 0.6707 | 0.6716 |
| 0.2732 | 45.83 | 8800 | 0.9188 | 0.6715 | 0.6735 |
| 0.2715 | 46.88 | 9000 | 0.9204 | 0.6747 | 0.6755 |
| 0.2623 | 47.92 | 9200 | 0.9272 | 0.6707 | 0.6712 |
| 0.2654 | 48.96 | 9400 | 0.9270 | 0.6700 | 0.6716 |
| 0.2598 | 50.0 | 9600 | 0.9354 | 0.6737 | 0.6738 |
| 0.2611 | 51.04 | 9800 | 0.9368 | 0.6737 | 0.6742 |
| 0.2565 | 52.08 | 10000 | 0.9348 | 0.6711 | 0.6716 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K4me2-seqsight_4096_512_46M-L32_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K4me2-seqsight_4096_512_46M-L32_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:44:24+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_EMP\_H3K4me2-seqsight\_4096\_512\_46M-L32\_f
=================================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K4me2 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5838
* F1 Score: 0.6957
* Accuracy: 0.6970
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f
This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_4096_512_46M](https://huggingface.co/mahdibaghbanzadeh/seqsight_4096_512_46M) on the [mahdibaghbanzadeh/GUE_EMP_H3K9ac](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_EMP_H3K9ac) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4618
- F1 Score: 0.8001
- Accuracy: 0.7996
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.5729 | 1.15 | 200 | 0.5451 | 0.7280 | 0.7290 |
| 0.5306 | 2.3 | 400 | 0.5608 | 0.7185 | 0.7200 |
| 0.512 | 3.45 | 600 | 0.5266 | 0.7362 | 0.7359 |
| 0.5029 | 4.6 | 800 | 0.5154 | 0.7436 | 0.7431 |
| 0.4963 | 5.75 | 1000 | 0.5082 | 0.7440 | 0.7445 |
| 0.4907 | 6.9 | 1200 | 0.5116 | 0.7515 | 0.7510 |
| 0.4837 | 8.05 | 1400 | 0.5103 | 0.7524 | 0.7521 |
| 0.48 | 9.2 | 1600 | 0.5221 | 0.7459 | 0.7463 |
| 0.4729 | 10.34 | 1800 | 0.5101 | 0.7541 | 0.7539 |
| 0.4742 | 11.49 | 2000 | 0.5007 | 0.7596 | 0.7596 |
| 0.4669 | 12.64 | 2200 | 0.5137 | 0.7549 | 0.7546 |
| 0.4675 | 13.79 | 2400 | 0.4950 | 0.7656 | 0.7654 |
| 0.4648 | 14.94 | 2600 | 0.4951 | 0.7651 | 0.7647 |
| 0.4611 | 16.09 | 2800 | 0.5000 | 0.7629 | 0.7625 |
| 0.4573 | 17.24 | 3000 | 0.5075 | 0.7616 | 0.7611 |
| 0.4572 | 18.39 | 3200 | 0.5053 | 0.7625 | 0.7621 |
| 0.4581 | 19.54 | 3400 | 0.4920 | 0.7652 | 0.7647 |
| 0.4508 | 20.69 | 3600 | 0.4946 | 0.7632 | 0.7632 |
| 0.4475 | 21.84 | 3800 | 0.4949 | 0.7641 | 0.7639 |
| 0.4479 | 22.99 | 4000 | 0.4966 | 0.7630 | 0.7629 |
| 0.4468 | 24.14 | 4200 | 0.4915 | 0.7658 | 0.7657 |
| 0.4375 | 25.29 | 4400 | 0.5056 | 0.7644 | 0.7639 |
| 0.4442 | 26.44 | 4600 | 0.4948 | 0.7619 | 0.7614 |
| 0.4416 | 27.59 | 4800 | 0.5015 | 0.7672 | 0.7668 |
| 0.4381 | 28.74 | 5000 | 0.4962 | 0.7631 | 0.7629 |
| 0.4409 | 29.89 | 5200 | 0.4953 | 0.7659 | 0.7654 |
| 0.4345 | 31.03 | 5400 | 0.4977 | 0.7658 | 0.7654 |
| 0.4345 | 32.18 | 5600 | 0.4902 | 0.7655 | 0.7654 |
| 0.4294 | 33.33 | 5800 | 0.5008 | 0.7656 | 0.7654 |
| 0.4378 | 34.48 | 6000 | 0.4893 | 0.7666 | 0.7661 |
| 0.4267 | 35.63 | 6200 | 0.4947 | 0.7699 | 0.7697 |
| 0.434 | 36.78 | 6400 | 0.4922 | 0.7652 | 0.7647 |
| 0.4283 | 37.93 | 6600 | 0.5046 | 0.7654 | 0.7650 |
| 0.4271 | 39.08 | 6800 | 0.4893 | 0.7691 | 0.7686 |
| 0.4252 | 40.23 | 7000 | 0.4951 | 0.7623 | 0.7618 |
| 0.4233 | 41.38 | 7200 | 0.4940 | 0.7655 | 0.7650 |
| 0.425 | 42.53 | 7400 | 0.4938 | 0.7687 | 0.7683 |
| 0.426 | 43.68 | 7600 | 0.4903 | 0.7708 | 0.7704 |
| 0.4194 | 44.83 | 7800 | 0.4950 | 0.7648 | 0.7643 |
| 0.424 | 45.98 | 8000 | 0.4897 | 0.7694 | 0.7690 |
| 0.4236 | 47.13 | 8200 | 0.4926 | 0.7670 | 0.7665 |
| 0.4186 | 48.28 | 8400 | 0.4926 | 0.7669 | 0.7665 |
| 0.4177 | 49.43 | 8600 | 0.4937 | 0.7662 | 0.7657 |
| 0.4183 | 50.57 | 8800 | 0.4941 | 0.7669 | 0.7665 |
| 0.4197 | 51.72 | 9000 | 0.4950 | 0.7659 | 0.7654 |
| 0.4179 | 52.87 | 9200 | 0.4951 | 0.7655 | 0.7650 |
| 0.4188 | 54.02 | 9400 | 0.4934 | 0.7673 | 0.7668 |
| 0.4183 | 55.17 | 9600 | 0.4939 | 0.7673 | 0.7668 |
| 0.4171 | 56.32 | 9800 | 0.4922 | 0.7687 | 0.7683 |
| 0.4187 | 57.47 | 10000 | 0.4928 | 0.7684 | 0.7679 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 | {"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_4096_512_46M", "model-index": [{"name": "GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f", "results": []}]} | mahdibaghbanzadeh/GUE_EMP_H3K9ac-seqsight_4096_512_46M-L1_f | null | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:mahdibaghbanzadeh/seqsight_4096_512_46M",
"region:us"
] | null | 2024-04-26T20:44:38+00:00 | [] | [] | TAGS
#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us
| GUE\_EMP\_H3K9ac-seqsight\_4096\_512\_46M-L1\_f
===============================================
This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_4096\_512\_46M on the mahdibaghbanzadeh/GUE\_EMP\_H3K9ac dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4618
* F1 Score: 0.8001
* Accuracy: 0.7996
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0005
* train\_batch\_size: 128
* eval\_batch\_size: 128
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* training\_steps: 10000
### Training results
### Framework versions
* PEFT 0.9.0
* Transformers 4.38.2
* Pytorch 2.2.0+cu121
* Datasets 2.17.1
* Tokenizers 0.15.2
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] | [
"TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_4096_512_46M #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2"
] |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UrduSum5k_multi2
This model is a fine-tuned version of [ahmed0189/mT5-Arabic-text-summarization](https://huggingface.co/ahmed0189/mT5-Arabic-text-summarization) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2260
- Rouge1: 0.14
- Rouge2: 0.0333
- Rougel: 0.14
- Rougelsum: 0.14
- Gen Len: 34.6705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.8803 | 1.0 | 1500 | 3.2260 | 0.14 | 0.0333 | 0.14 | 0.14 | 34.6705 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "model-index": [{"name": "UrduSum5k_multi2", "results": []}]} | Alefiah/UrduSum5k_multi2 | null | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:45:02+00:00 | [] | [] | TAGS
#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| UrduSum5k\_multi2
=================
This model is a fine-tuned version of ahmed0189/mT5-Arabic-text-summarization on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 3.2260
* Rouge1: 0.14
* Rouge2: 0.0333
* Rougel: 0.14
* Rougelsum: 0.14
* Gen Len: 34.6705
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.28.0
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.13.3
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] | [
"TAGS\n#transformers #pytorch #tensorboard #mt5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3"
] |
reinforcement-learning | stable-baselines3 |
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Follow these steps to evaluate the agent locally:
```python
import gymnasium as gym  # SB3 >= 2.0 uses gymnasium; older versions use gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

repo_id = "Laz4rz/hf-LunarLander-1-ppo"  # the repo id on the Hub
filename = "ppo-LunarLander-v2.zip"  # the model filename (.zip)

# Download the checkpoint from the Hub and load it into a PPO model
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint)

# Wrap the environment in a Monitor to record episode statistics
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
...
```
| {"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "261.43 +/- 17.17", "name": "mean_reward", "verified": false}]}]}]} | Laz4rz/hf-LunarLander-1-ppo | null | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null | 2024-04-26T20:45:27+00:00 | [] | [] | TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
Follow these steps to evaluate the agent locally:
| [
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nFollow to eval the agent locally:"
] | [
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nFollow to eval the agent locally:"
] |
text-generation | null |
# Phi-3-mini-4k-instruct - llamafile
This repository contains executable weights (which we call
[llamafiles](https://github.com/Mozilla-Ocho/llamafile)) that run on
Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.
- Model creator: [Microsoft](https://huggingface.co/microsoft)
- Original model: [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)
## Quickstart
Assuming your system has at least 32GB of RAM, you can try running the
following commands, which download and execute the model.
```
wget https://huggingface.co/jartine/Phi-3-mini-4k-instruct-llamafile/resolve/main/Phi-3-mini-4k-instruct.F16.llamafile
chmod +x Phi-3-mini-4k-instruct.F16.llamafile
./Phi-3-mini-4k-instruct.F16.llamafile --help # view manual
./Phi-3-mini-4k-instruct.F16.llamafile # launch web gui + oai api
./Phi-3-mini-4k-instruct.F16.llamafile -p ... # cli interface (scriptable)
```
Alternatively, you may download an official `llamafile` executable from
Mozilla Ocho on GitHub, in which case you can use the Phi-3 llamafile
as a simple weights data file.
```
llamafile -m ./Phi-3-mini-4k-instruct.F16.llamafile ...
```
For further information, please see the [llamafile
README](https://github.com/mozilla-ocho/llamafile/).
Having **trouble?** See the ["Gotchas"
section](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas)
of the README.
## Prompting
Prompt template:
```
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
Command template:
```
./Phi-3-mini-4k-instruct.F16.llamafile -e -p "<|user|>\n{{prompt}}<|end|>\n<|assistant|>"
```
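For scripted use, the prompt can be assembled in code from the template above. A minimal Python sketch follows; the `phi3_prompt` helper is illustrative, not part of llamafile or the Phi-3 tooling:

```python
from typing import Optional

def phi3_prompt(user_message: str, system_message: Optional[str] = None) -> str:
    """Assemble a Phi-3-style prompt following the template above."""
    parts = []
    if system_message is not None:
        parts.append(f"<|system|>\n{system_message}<|end|>")
    parts.append(f"<|user|>\n{user_message}<|end|>")
    parts.append("<|assistant|>")  # the model continues from here
    return "\n".join(parts)

prompt = phi3_prompt(
    "How to explain Internet for a medieval knight?",
    system_message="You are a helpful AI assistant.",
)
print(prompt)
```

The resulting string can be passed to the llamafile CLI via `-p`, with `-e` enabling escape processing for the embedded newlines.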
## About llamafile
llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable llama.cpp
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.
In addition to being executables, llamafiles are also zip archives. Each
llamafile contains a GGUF file, which you can extract using the `unzip`
command. If you want to change or add files to your llamafiles, then the
`zipalign` command (distributed on the llamafile github) should be used
instead of the traditional `zip` command.
## Licensing (Phi-3 Specific)
The Phi-3 llamafiles are licensed Apache 2.0 because some of the
software that went into creating these llamafiles uses that as its
license. The Phi-3 weights themselves were published by Microsoft under
the even more permissive MIT license. You can use the `unzip` command to
extract the MIT-licensed GGUF file from each llamafile, which contains
only the Microsoft Phi-3 weights.
For further details on the complete picture, read our `LICENSE` file,
since it documents the copyright notice of every transitive dependency.
## About Quantization Formats (General Advice)
Your choice of quantization format depends on three things:
1. Will it fit in RAM or VRAM?
2. Is your use case reading (e.g. summarization) or writing (e.g. chatbot)?
3. llamafiles bigger than 4.30 GB are hard to run on Windows (see [gotchas](https://github.com/mozilla-ocho/llamafile/?tab=readme-ov-file#gotchas))
Good quants for writing (prediction speed) are Q5\_K\_M and Q4\_0. Text
generation is bounded by memory speed, so smaller quants help, but they
cause the LLM to hallucinate more. However, that doesn't mean they can't
think correctly. A highly degraded quant like `Q2_K` may not make a
great encyclopedia, but it's still capable of logical reasoning and
the emergent capabilities LLMs exhibit.
Good quants for reading (evaluation speed) are BF16, F16, Q8\_0, and
Q4\_0 (ordered from fastest to slowest). Prompt evaluation is bounded by
flop count, which means perf can be improved through software
engineering alone, e.g. BLAS algorithms, in which case quantization
starts hurting more than it helps, since it competes for CPU resources
and makes it harder for the compiler to parallelize instructions. You
want to ideally use the simplest smallest floating point format that's
natively implemented by your hardware. In most cases, that's BF16 or
FP16. However, llamafile is able to still offer respectable tinyBLAS
speedups for llama.cpp's simplest quants: Q8\_0 and Q4\_0.
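As a rough back-of-envelope for question 1 above, weight size scales with bits per weight. The figures below are approximations (block quant formats carry per-block scale overhead), used here only to illustrate the trade-off:

```python
# Approximate bits per weight, including block-scale overhead (rough estimates).
BITS_PER_WEIGHT = {"F16": 16.0, "BF16": 16.0, "Q8_0": 8.5,
                   "Q5_K_M": 5.5, "Q4_0": 4.5, "Q2_K": 2.6}

def approx_weight_gib(n_params: float, quant: str) -> float:
    """Estimate the on-disk / in-RAM weight size in GiB for a quant format."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 2**30

# Phi-3 Mini has roughly 3.8B parameters.
for q in ("F16", "Q8_0", "Q4_0"):
    print(f"{q}: ~{approx_weight_gib(3.8e9, q):.1f} GiB")
```

Compare the results against your available RAM/VRAM and against the 4.30 GB Windows limit mentioned above.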
--
## Model Summary
The Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, in its Mini version, and comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which refer to the context length (in tokens) it can support.
The model has undergone a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/phi3blog-april)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ Phi-3 GGUF: [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
+ Phi-3 ONNX: [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3 Mini-4K-Instruct has been integrated into the development version (4.40.0) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3 Mini-4K-Instruct is also available in [HuggingChat](https://aka.ms/try-phi3-hf-chat).
### Tokenizer
Phi-3 Mini-4K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, the prompt can be formatted as follows:
```markdown
<|system|>
You are a helpful AI assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
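The templates above can be assembled mechanically. Below is a minimal sketch of that assembly (in practice, the `apply_chat_template()` method of the `transformers` tokenizer handles this for you, and the runtime prepends the BOS token):

```python
def phi3_prompt(messages):
    """Build a Phi-3 chat prompt from (role, content) pairs.

    Mirrors the template shown above: each turn is wrapped in
    <|role|> ... <|end|>, and the string ends with <|assistant|>
    so the model continues with its reply.
    """
    body = "".join(f"<|{role}|>\n{content}<|end|>\n" for role, content in messages)
    return body + "<|assistant|>\n"

print(phi3_prompt([
    ("system", "You are a helpful AI assistant."),
    ("user", "How to explain Internet for a medieval knight?"),
]))
```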
### Sample inference code
These code snippets show how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
messages = [
{"role": "system", "content": "You are a helpful digital assistant. Please provide safe, ethical and accurate information to the user."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| | Phi-3-Mini-4K-In<br>3.8b | Phi-3-Small<br>7b (preview) | Phi-3-Medium<br>14b (preview) | Phi-2<br>2.7b | Mistral<br>7b | Gemma<br>7b | Llama-3-In<br>8b | Mixtral<br>8x7b | GPT-3.5<br>version 1106 |
|---|---|---|---|---|---|---|---|---|---|
| MMLU <br>5-Shot | 68.8 | 75.3 | 78.2 | 56.3 | 61.7 | 63.6 | 66.5 | 68.4 | 71.4 |
| HellaSwag <br> 5-Shot | 76.7 | 78.7 | 83.2 | 53.6 | 58.5 | 49.8 | 71.1 | 70.4 | 78.8 |
| ANLI <br> 7-Shot | 52.8 | 55.0 | 58.7 | 42.5 | 47.1 | 48.7 | 57.3 | 55.2 | 58.1 |
| GSM-8K <br> 0-Shot; CoT | 82.5 | 86.4 | 90.8 | 61.1 | 46.4 | 59.8 | 77.4 | 64.7 | 78.1 |
| MedQA <br> 2-Shot | 53.8 | 58.2 | 69.8 | 40.9 | 49.6 | 50.0 | 60.5 | 62.2 | 63.4 |
| AGIEval <br> 0-Shot | 37.5 | 45.0 | 49.7 | 29.8 | 35.1 | 42.1 | 42.0 | 45.2 | 48.4 |
| TriviaQA <br> 5-Shot | 64.0 | 59.1 | 73.3 | 45.2 | 72.3 | 75.2 | 67.7 | 82.2 | 85.8 |
| Arc-C <br> 10-Shot | 84.9 | 90.7 | 91.9 | 75.9 | 78.6 | 78.3 | 82.8 | 87.3 | 87.4 |
| Arc-E <br> 10-Shot | 94.6 | 97.1 | 98.0 | 88.5 | 90.6 | 91.4 | 93.4 | 95.6 | 96.3 |
| PIQA <br> 5-Shot | 84.2 | 87.8 | 88.2 | 60.2 | 77.7 | 78.1 | 75.7 | 86.0 | 86.6 |
| SociQA <br> 5-Shot | 76.6 | 79.0 | 79.4 | 68.3 | 74.6 | 65.5 | 73.9 | 75.9 | 68.3 |
| BigBench-Hard <br> 0-Shot | 71.7 | 75.0 | 82.5 | 59.4 | 57.3 | 59.6 | 51.5 | 69.7 | 68.32 |
| WinoGrande <br> 5-Shot | 70.8 | 82.5 | 81.2 | 54.7 | 54.2 | 55.6 | 65 | 62.0 | 68.8 |
| OpenBookQA <br> 10-Shot | 83.2 | 88.4 | 86.6 | 73.6 | 79.8 | 78.6 | 82.6 | 85.8 | 86.0 |
| BoolQ <br> 0-Shot | 77.6 | 82.9 | 86.5 | -- | 72.2 | 66.0 | 80.9 | 77.6 | 79.1 |
| CommonSenseQA <br> 10-Shot | 80.2 | 80.3 | 82.6 | 69.3 | 72.6 | 76.2 | 79 | 78.1 | 79.6 |
| TruthfulQA <br> 10-Shot | 65.0 | 68.1 | 74.8 | -- | 52.1 | 53.0 | 63.2 | 60.1 | 85.8 |
| HumanEval <br> 0-Shot | 59.1 | 59.1 | 54.7 | 47.0 | 28.0 | 34.1 | 60.4 | 37.8 | 62.2 |
| MBPP <br> 3-Shot | 53.8 | 71.4 | 73.7 | 60.6 | 50.8 | 51.5 | 67.7 | 60.2 | 77.8 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
* CPU: use the **GGUF** quantized models [4K](https://aka.ms/Phi3-mini-4k-instruct-gguf)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [4K](https://aka.ms/Phi3-mini-4k-instruct-onnx)
## Cross Platform Support
ONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model [here](https://aka.ms/phi3-mini-4k-instruct-onnx).
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-4k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| Phi-3-mini-4k-instruct - llamafile
==================================
This repository contains executable weights (which we call
llamafiles) that run on
Linux, MacOS, Windows, FreeBSD, OpenBSD, and NetBSD for AMD64 and ARM64.
* Model creator: Microsoft
* Original model: microsoft/Phi-3-mini-4k-instruct
Quickstart
----------
Assuming your system has at least 32GB of RAM, you can try running the
following command which download, concatenate, and execute the model.
Alternatively, you may download an official 'llamafile' executable from
Mozilla Ocho on GitHub, in which case you can use the Mixtral llamafiles
as a simple weights data file.
For further information, please see the llamafile
README.
Having trouble? See the "Gotchas"
section
of the README.
Prompting
---------
Prompt template:
Command template:
About llamafile
---------------
llamafile is a new format introduced by Mozilla Ocho on Nov 20th 2023.
It uses Cosmopolitan Libc to turn LLM weights into runnable URL
binaries that run on the stock installs of six OSes for both ARM64 and
AMD64.
In addition to being executables, llamafiles are also zip archives. Each
llamafile contains a GGUF file, which you can extract using the 'unzip'
command. If you want to change or add files to your llamafiles, then the
'zipalign' command (distributed on the llamafile github) should be used
instead of the traditional 'zip' command.
Licensing (Phi-3 Specific)
--------------------------
The Phi-3 llamafiles are licensed Apache 2.0 because some of the
software that went into creating these llamafiles uses that as its
license. The Phi-3 weights themselves were published by Microsoft under
the even more permissive MIT license. You can use the 'unzip' command to
extract the MIT-licensed GGUF file from each llamafile, which contains
only the Microsoft Phi-3 weights.
For further details on the complete picture, read our 'LICENSE' file,
since it documents the copyright notice of every transitive dependency.
About Quantization Formats (General Advice)
-------------------------------------------
Your choice of quantization format depends on three things:
1. Will it fit in RAM or VRAM?
2. Is your use case reading (e.g. summarization) or writing (e.g. chatbot)?
3. llamafiles bigger than 4.30 GB are hard to run on Windows (see gotchas)
Good quants for writing (prediction speed) are Q5\_K\_M, and Q4\_0. Text
generation is bounded by memory speed, so smaller quants help, but they
cause the LLM to hallucinate more. However that doesn't mean they can't
think correctly. A highly degraded quant like 'Q2\_K' may not make a
great encyclopedia, but it's still capable of logical reasoning and
the emergent capabilities LLMs exhibit.
Good quants for reading (evaluation speed) are BF16, F16, Q8\_0, and
Q4\_0 (ordered from fastest to slowest). Prompt evaluation is bounded by
flop count, which means perf can be improved through software
engineering alone, e.g. BLAS algorithms, in which case quantization
starts hurting more than it helps, since it competes for CPU resources
and makes it harder for the compiler to parallelize instructions. You
want to ideally use the simplest smallest floating point format that's
natively implemented by your hardware. In most cases, that's BF16 or
FP16. However, llamafile is able to still offer respectable tinyBLAS
speedups for URL's simplest quants: Q8\_0 and Q4\_0.
--
Model Summary
-------------
The Phi-3-Mini-4K-Instruct is a 3.8B parameters, lightweight, state-of-the-art open model trained with the Phi-3 datasets that includes both synthetic data and the filtered publicly available websites data with a focus on high-quality and reasoning dense properties.
The model belongs to the Phi-3 family with the Mini version in two variants 4K and 128K which is the context length (in tokens) that it can support.
The model has underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for the instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3 Mini-4K-Instruct showcased a robust and state-of-the-art performance among models with less than 13 billion parameters.
Resources and Technical Documentation:
* Phi-3 Microsoft Blog
* Phi-3 Technical Report
* Phi-3 on Azure AI Studio
* Phi-3 GGUF: 4K
* Phi-3 ONNX: 4K
Intended Uses
-------------
Primary use cases
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1. Memory/compute constrained environments
2. Latency bound scenarios
3. Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
Use case considerations
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fariness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
How to Use
----------
Phi-3 Mini-4K-Instruct has been integrated in the development version (4.40.0) of 'transformers'. Until the official version is released through 'pip', ensure that you are doing one of the following:
* When loading the model, ensure that 'trust\_remote\_code=True' is passed as an argument of the 'from\_pretrained()' function.
* Update your local 'transformers' to the development version: 'pip uninstall -y transformers && pip install git+URL The previous command is an alternative to cloning and installing from the source.
The current 'transformers' version can be verified with: 'pip list | grep transformers'.
Phi-3 Mini-4K-Instruct is also available in HuggingChat.
### Tokenizer
Phi-3 Mini-4K-Instruct supports a vocabulary size of up to '32064' tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Chat Format
Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follow:
For example:
where the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:
### Sample inference code
This code snippets show how to get quickly started with running the model on a GPU:
*Some applications/frameworks might not include a BOS token ('~~') at the start of the conversation. Please ensure that it is included since it provides more reliable results.~~*
Responsible AI Considerations
-----------------------------
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
Training
--------
### Model
* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 4K tokens
* GPUs: 512 H100-80G
* Training time: 7 days
* Training data: 3.3T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
### Datasets
Our training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of
1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.
Benchmarks
----------
We report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
Software
--------
* PyTorch
* DeepSpeed
* Transformers
* Flash-Attention
Hardware
--------
Note that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\_pretrained() with attn\_implementation="eager"
* CPU: use the GGUF quantized models 4K
* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K
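As a minimal sketch of the fallback logic above, the attention backend can be chosen from the GPU's CUDA compute capability. The 8.0 threshold is inferred from the tested-GPU list (A100/A6000/H100 are all compute capability 8.0 or newer, V100 is 7.0); it is not stated in the card itself.

```python
def pick_attn_implementation(cuda_capability):
    # Flash attention needs compute capability 8.0+ (A100, A6000, and H100
    # all qualify); V100 (7.0) and earlier GPUs fall back to "eager".
    major, _minor = cuda_capability
    return "flash_attention_2" if major >= 8 else "eager"

print(pick_attn_implementation((7, 0)))  # V100 -> "eager"
print(pick_attn_implementation((9, 0)))  # H100 -> "flash_attention_2"
```

The result can then be passed when loading, e.g. `AutoModelForCausalLM.from_pretrained(model_id, attn_implementation=pick_attn_implementation(torch.cuda.get_device_capability()))`.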
Cross Platform Support
----------------------
The ONNX Runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs.
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
License
-------
The model is licensed under the MIT license.
Trademarks
----------
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
| [
"### Tokenizer\n\n\nPhi-3 Mini-4K-Instruct supports a vocabulary size of up to '32064' tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.",
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\n*Some applications/frameworks might not include a BOS token ('~~') at the start of the conversation. Please ensure that it is included since it provides more reliable results.~~*\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". 
If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. 
You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] | [
"TAGS\n#llamafile #text-generation #en #base_model-microsoft/Phi-3-mini-4k-instruct #license-apache-2.0 #region-us \n",
"### Tokenizer\n\n\nPhi-3 Mini-4K-Instruct supports a vocabulary size of up to '32064' tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.",
"### Chat Format\n\n\nGiven the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited for prompts using the chat format as follows.\nYou can provide the prompt as a question with a generic template as follow:\n\n\nFor example:\n\n\nwhere the model generates the text after '<|assistant|>' . In case of few-shots prompt, the prompt can be formatted as the following:",
"### Sample inference code\n\n\nThis code snippets show how to get quickly started with running the model on a GPU:\n\n\n*Some applications/frameworks might not include a BOS token ('~~') at the start of the conversation. Please ensure that it is included since it provides more reliable results.~~*\n\n\nResponsible AI Considerations\n-----------------------------\n\n\nLike other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:\n\n\n* Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.\n* Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.\n* Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.\n* Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.\n* Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as \"typing, math, random, collections, datetime, itertools\". 
If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.\n\n\nDevelopers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:\n\n\n* Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.\n* High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.\n* Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).\n* Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.\n* Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.\n\n\nTraining\n--------",
"### Model\n\n\n* Architecture: Phi-3 Mini-4K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidlines.\n* Inputs: Text. It is best suited for prompts using chat format.\n* Context length: 4K tokens\n* GPUs: 512 H100-80G\n* Training time: 7 days\n* Training data: 3.3T tokens\n* Outputs: Generated text in response to the input\n* Dates: Our models were trained between February and April 2024\n* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.",
"### Datasets\n\n\nOur training data includes a wide variety of sources, totaling 3.3 trillion tokens, and is a combination of\n\n\n1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;\n2. Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);\n3. High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.",
"### Fine-tuning\n\n\nA basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided here.\n\n\nBenchmarks\n----------\n\n\nWe report the results for Phi-3-Mini-4K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Phi-2, Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.\n\n\nAll the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.\n\n\nAs is now standard, we use few-shot prompts to evaluate the models, at temperature 0.\nThe prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.\nMore specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.\n\n\nThe number of k–shot examples is listed per-benchmark.\n\n\n\nSoftware\n--------\n\n\n* PyTorch\n* DeepSpeed\n* Transformers\n* Flash-Attention\n\n\nHardware\n--------\n\n\nNote that by default, the Phi-3-mini model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:\n\n\n* NVIDIA A100\n* NVIDIA A6000\n* NVIDIA H100\n\n\nIf you want to run the model on:\n\n\n* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from\\_pretrained() with attn\\_implementation=\"eager\"\n* CPU: use the GGUF quantized models 4K\n\n\n* Optimized inference on GPU, CPU, and Mobile: use the ONNX models 4K\n\n\nCross Platform Support\n----------------------\n\n\nONNX runtime ecosystem now supports Phi-3 Mini models across platforms and hardware. 
You can find the optimized Phi-3 Mini-4K-Instruct ONNX model here.\n\n\nOptimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML support lets developers bring hardware acceleration to Windows devices at scale across AMD, Intel, and NVIDIA GPUs. \n\nAlong with DirectML, ONNX Runtime provides cross platform support for Phi-3 across a range of devices CPU, GPU, and mobile.\n\n\nHere are some of the optimized configurations we have added:\n\n\n1. ONNX models for int4 DML: Quantized to int4 via AWQ\n2. ONNX model for fp16 CUDA\n3. ONNX model for int4 CUDA: Quantized to int4 via RTN\n4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN\n\n\nLicense\n-------\n\n\nThe model is licensed under the MIT license.\n\n\nTrademarks\n----------\n\n\nThis project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft’s Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies."
] |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5790
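Since this is a causal language model, the reported evaluation loss is the mean per-token cross-entropy, so validation perplexity follows directly. This conversion is standard practice rather than something stated in the card:

```python
import math

eval_loss = 3.5790  # final validation loss reported above
perplexity = math.exp(eval_loss)  # per-token perplexity for a causal LM
print(round(perplexity, 1))  # ~35.8
```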
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
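A sketch of what the `linear` scheduler implies for this run: with 1273 steps per epoch and 3 epochs (3819 total steps, taken from the results table), the learning rate decays linearly from 2e-05 to 0. No warmup steps are listed, so none are modeled here (an assumption).

```python
def linear_lr(step, total_steps=3819, base_lr=2e-5):
    # The "linear" scheduler decays the learning rate from base_lr at
    # step 0 down to 0 at the final step, with no warmup configured.
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))     # 2e-05 at the start of training
print(linear_lr(3819))  # 0.0 at the final step
```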
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6908 | 1.0 | 1273 | 3.5831 |
| 3.5749 | 2.0 | 2546 | 3.5787 |
| 3.5283 | 3.0 | 3819 | 3.5790 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "gpt2", "model-index": [{"name": "my_awesome_eli5_clm-model", "results": []}]} | GauravR12060102/my_awesome_eli5_clm-model | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:gpt2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:48:38+00:00 | [] | [] | TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| my\_awesome\_eli5\_clm-model
============================
This model is a fine-tuned version of gpt2 on the eli5\_category dataset.
It achieves the following results on the evaluation set:
* Loss: 3.5790
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.1
* Pytorch 2.2.1+cu121
* Datasets 2.19.0
* Tokenizers 0.19.1
| [
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] | [
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #dataset-eli5_category #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1"
] |
null | transformers | ## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
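For the concatenation step: multi-part GGUF uploads of this style are produced by plain byte-splitting, so joining the parts in order with `cat` reconstructs the original file. The file names below are stand-ins for illustration; substitute the actual part names from the repo (newer llama.cpp releases also ship a dedicated `gguf-split` tool for merging).

```shell
# Stand-in part files for illustration; replace these with the real
# split GGUF parts downloaded from the repository.
printf 'part-a' > model.gguf-split-a
printf 'part-b' > model.gguf-split-b

# Joining the parts in order with cat reconstructs the original file.
cat model.gguf-split-a model.gguf-split-b > model.gguf
```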
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama3-Aloe-8B-Alpha-GGUF/resolve/main/Llama3-Aloe-8B-Alpha.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["biology", "medical"], "datasets": ["argilla/dpo-mix-7k", "nvidia/HelpSteer", "jondurbin/airoboros-3.2", "hkust-nlp/deita-10k-v0", "LDJnr/Capybara", "HPAI-BSC/CareQA", "GBaker/MedQA-USMLE-4-options", "lukaemon/mmlu", "bigbio/pubmed_qa", "openlifescienceai/medmcqa", "bigbio/med_qa"], "base_model": "HPAI-BSC/Llama3-Aloe-8B-Alpha", "quantized_by": "mradermacher"} | mradermacher/Llama3-Aloe-8B-Alpha-GGUF | null | [
"transformers",
"gguf",
"biology",
"medical",
"en",
"dataset:argilla/dpo-mix-7k",
"dataset:nvidia/HelpSteer",
"dataset:jondurbin/airoboros-3.2",
"dataset:hkust-nlp/deita-10k-v0",
"dataset:LDJnr/Capybara",
"dataset:HPAI-BSC/CareQA",
"dataset:GBaker/MedQA-USMLE-4-options",
"dataset:lukaemon/mmlu",
"dataset:bigbio/pubmed_qa",
"dataset:openlifescienceai/medmcqa",
"dataset:bigbio/med_qa",
"base_model:HPAI-BSC/Llama3-Aloe-8B-Alpha",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-04-26T20:49:47+00:00 | [] | [
"en"
] | TAGS
#transformers #gguf #biology #medical #en #dataset-argilla/dpo-mix-7k #dataset-nvidia/HelpSteer #dataset-jondurbin/airoboros-3.2 #dataset-hkust-nlp/deita-10k-v0 #dataset-LDJnr/Capybara #dataset-HPAI-BSC/CareQA #dataset-GBaker/MedQA-USMLE-4-options #dataset-lukaemon/mmlu #dataset-bigbio/pubmed_qa #dataset-openlifescienceai/medmcqa #dataset-bigbio/med_qa #base_model-HPAI-BSC/Llama3-Aloe-8B-Alpha #license-cc-by-nc-4.0 #endpoints_compatible #region-us
| About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
| [] | [
"TAGS\n#transformers #gguf #biology #medical #en #dataset-argilla/dpo-mix-7k #dataset-nvidia/HelpSteer #dataset-jondurbin/airoboros-3.2 #dataset-hkust-nlp/deita-10k-v0 #dataset-LDJnr/Capybara #dataset-HPAI-BSC/CareQA #dataset-GBaker/MedQA-USMLE-4-options #dataset-lukaemon/mmlu #dataset-bigbio/pubmed_qa #dataset-openlifescienceai/medmcqa #dataset-bigbio/med_qa #base_model-HPAI-BSC/Llama3-Aloe-8B-Alpha #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n"
] |
text-generation | transformers | [Phi 3 128K instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) weights in the Llama format.
All credit goes to @vonjack for their useful [conversion script](https://huggingface.co/vonjack/Phi-3-mini-4k-instruct-LLaMAfied) and the index files that were used to make this.
Check out their version for the [4k model](https://huggingface.co/vonjack/Phi-3-mini-4k-instruct-LLaMAfied)! | {"language": ["en"], "license": "mit"} | mrfakename/llamaphi-3-128k-instruct | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-04-26T20:52:04+00:00 | [] | [
"en"
] | TAGS
#transformers #safetensors #llama #text-generation #conversational #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
| Phi 3 128K instruct weights in the Llama format.
All credit goes to @vonjack for their useful conversion script as well as index files which were used to make this.
Check out their version for the 4k model! | [] | [
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null | null |
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom, diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
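The DPO objective cited above has a compact closed form; the sketch below is an illustrative re-implementation of the loss from the DPO paper in plain Python, not the actual training code used for this model:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen / rejected
    responses under the policy (pi_*) and the frozen reference
    model (ref_*). beta controls the strength of the implicit KL penalty.
    """
    margin = (pi_chosen - pi_rejected) - (ref_chosen - ref_rejected)
    # loss = -log(sigmoid(beta * margin))
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# If the policy prefers the chosen response more strongly than the
# reference does, the margin is positive and the loss drops below log(2).
low = dpo_loss(-10.0, -20.0, -15.0, -15.0)
high = dpo_loss(-20.0, -10.0, -15.0, -15.0)
print(low, high)
```

In real training this loss is averaged over batches of preference pairs; frameworks such as TRL's `DPOTrainer` implement the same formula with batched tensors.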
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise there will be a degradation in performance. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch

model_id = "aaditya/OpenBioLLM-Llama3-70B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. You are willing to help answer the user's query with an explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
    {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg dose?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=False,  # greedy decoding, equivalent to temperature = 0
)
print(outputs[0]["generated_text"][len(prompt):])
```
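For reference, the Llama-3 instruct template that `apply_chat_template` emits wraps each turn in header tokens. A minimal sketch of the same formatting in plain Python is shown below for inspection only; always prefer the tokenizer's own template in actual use:

```python
def llama3_prompt(messages):
    """Render messages in the Llama-3 instruct format, ending with an
    open assistant header so the model continues as the assistant."""
    out = "<|begin_of_text|>"
    for m in messages:
        out += f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
    out += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return out

demo = llama3_prompt([
    {"role": "system", "content": "You are OpenBioLLM."},
    {"role": "user", "content": "How long does newborn jaundice last?"},
])
print(demo)
```

This is also why `<|eot_id|>` is added to the `terminators` list above: each completed turn ends with that token, so generation should stop when the model emits it.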
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
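The schedule listed above (cosine decay from a 2e-4 peak with 100 linear warmup steps) can be sketched as follows; `total_steps` is a made-up value for illustration, not taken from the card:

```python
import math

def lr_at(step, base_lr=2e-4, warmup_steps=100, total_steps=1000):
    """Linear warmup to base_lr, then cosine decay to zero."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Ramp up, peak at the end of warmup, decay back toward zero.
print(lr_at(0), lr_at(50), lr_at(100), lr_at(1000))
```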
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
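For intuition about what the QLoRA settings above imply: a LoRA adapter on a `d_in × d_out` linear layer adds `r · (d_in + d_out)` trainable parameters, with outputs scaled by `alpha / r` (here 256 / 128 = 2). A quick back-of-the-envelope sketch, using an illustrative 8192 × 8192 projection rather than the actual Llama-3-70B layer shapes:

```python
def lora_params(d_in, d_out, r=128):
    """Trainable parameters added by one LoRA adapter (A: d_in x r, B: r x d_out)."""
    return r * (d_in + d_out)

full = 8192 * 8192                 # parameters of the frozen full weight
adapter = lora_params(8192, 8192)  # parameters the adapter actually trains
print(adapter, adapter / full)     # the adapter is a small fraction of the layer
```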
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
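The Avg column is the unweighted mean of the nine per-dataset scores; OpenBioLLM-70B's row can be checked with a few lines:

```python
# Per-dataset scores from the OpenBioLLM-70B row of the table above.
scores = [92.93, 93.197, 83.904, 93.75, 93.827, 85.749, 78.162, 78.97, 74.014]
avg = sum(scores) / len(scores)
print(avg)  # agrees with the reported 86.05588 average
```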
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **The results below are from the quantized version of OpenBioLLM-70B.**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | {"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-70B-Instruct", "widget": [{"example_title": "OpenBioLLM-70B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. 
The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-70B", "results": []}]} | LoneStriker/OpenBioLLM-Llama3-70B-GGUF | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"region:us"
] | null | 2024-04-26T20:56:57+00:00 | [
"2305.18290",
"2303.13375",
"2212.13138",
"2305.09617",
"2402.07023"
] | [
"en"
] | TAGS
#gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #region-us
|

Advancing Open-source Large Language Models in Medical Domain
=============================================================
Online Demo
|
GitHub
|
[](#) |
Discord
!image/jpeg
Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
Biomedical Specialization: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
Superior Performance: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
Advanced Training Techniques: OpenBioLLM-70B builds upon the powerful foundations of the Meta-Llama-3-70B-Instruct and Meta-Llama-3-70B-Instruct models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
</li>
<li>Fine-tuning dataset: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)</li>
</ul>
<p>This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.</p>
<p>️ Release Details:</p>
<ul>
<li>Model Size: 70 billion parameters</li>
<li>Quantization: Optimized quantized versions available Here</li>
<li>Language(s) (NLP): en</li>
<li>Developed By: Ankit Pal (Aaditya Ura) from Saama AI Labs</li>
<li>License: Meta-Llama License</li>
<li>Fine-tuned from models: Meta-Llama-3-70B-Instruct</li>
<li>Resources for more information:
<ul>
<li>Paper: Coming soon</li>
</ul>
</li>
</ul>
<p>The model can be fine-tuned for more specialized tasks and datasets as needed.</p>
<p>OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.</p>
<p>We are excited to share OpenBioLLM-70B with researchers and developers around the world.</p>
<h3>Use with transformers</h3>
<p>Important: Please use the exact chat template provided by Llama-3 instruct version. Otherwise there will be a degradation in the performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.</p>
<p>See the snippet below for usage with Transformers:</p>
<h2>Training procedure</h2>
<h3>Training hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>learning_rate: 0.0002</li>
<li>lr_scheduler: cosine</li>
<li>train_batch_size: 12</li>
<li>eval_batch_size: 8</li>
<li>GPU: H100 80GB SXM5</li>
<li>num_devices: 8</li>
<li>optimizer: adamw_bnb_8bit</li>
<li>lr_scheduler_warmup_steps: 100</li>
<li>num_epochs: 4</li>
</ul>
</details>
<h3>Peft hyperparameters</h3>
<details>
<summary>Click to see details</summary>
<ul>
<li>adapter: qlora</li>
<li>lora_r: 128</li>
<li>lora_alpha: 256</li>
<li>lora_dropout: 0.05</li>
<li>lora_target_linear: true</li>
</ul>
<p>-lora_target_modules:</p>
<ul>
<li>q_proj</li>
<li>v_proj</li>
<li>k_proj</li>
<li>o_proj</li>
<li>gate_proj</li>
<li>down_proj</li>
<li>up_proj</li>
</ul>
</details>
<h3>Training results</h3>
<h3>Framework versions</h3>
<ul>
<li>Transformers 4.39.3</li>
<li>Pytorch 2.1.2+cu121</li>
<li>Datasets 2.18.0</li>
<li>Tokenizers 0.15.1</li>
<li>Axolotl</li>
<li>Lm harness for evaluation</li>
</ul>
<h1>Benchmark Results</h1>
<p>OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.</p>
<p>The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.</p>
<p></p>
<div align=)
 from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.</p>
<p>!image/png</p>
<p>Advisory Notice!</p>
<p>While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.</p>
<p>Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.</p>
<p>Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:</p>
<ul>
<li>Thorough testing and evaluation in relevant clinical scenarios</li>
<li>Alignment with evidence-based guidelines and best practices</li>
<li>Mitigation of potential biases and failure modes</li>
<li>Integration with human oversight and interpretation</li>
<li>Compliance with regulatory and ethical standards</li>
</ul>
<p>Always consult a qualified healthcare provider for personal medical needs.</p>
<p>If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:</p>
<p>The accompanying paper is currently in progress and will be released soon.</p>
<div align=)
Contact
--------
We look forward to hearing you and collaborating on this exciting project!
Contributors:
* Ankit Pal (Aaditya Ura) [aadityaura at gmail dot com]
* Saama AI Labs
* Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
References
==========
We thank the Meta Team for their amazing models!
Result sources
* [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems] (URL
* [2] Med-PaLM-1 Large Language Models Encode Clinical Knowledge
* [3] Med-PaLM-2 Towards Expert-Level Medical Question Answering with Large Language Models
* [4] Gemini-1.0 Gemini Goes to Med School
| [] | [
"TAGS\n#gguf #llama-3 #llama #Mixtral #instruct #finetune #chatml #DPO #RLHF #gpt4 #distillation #en #arxiv-2305.18290 #arxiv-2303.13375 #arxiv-2212.13138 #arxiv-2305.09617 #arxiv-2402.07023 #base_model-meta-llama/Meta-Llama-3-70B-Instruct #license-llama3 #region-us \n"
] |