modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
FINDA-FIT/llama-2-ko-plain | FINDA-FIT | "2023-09-30T03:50:17Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-09-30T03:32:48Z" | Entry not found |
Jaewoo1/Llama2-7B-ShareGPT-Wiki_noprompt-News_noprompt-CoT-blending-circulus | Jaewoo1 | "2023-10-04T05:48:18Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-04T03:38:30Z" | Entry not found |
DopeorNope/ZeroCoka-7B | DopeorNope | "2023-10-11T10:37:35Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-11T09:51:49Z" | Entry not found |
maywell/Synatra-11B-Testbench-2 | maywell | "2023-10-16T01:21:09Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-15T23:42:55Z" | ---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# **Synatra-11B-Testbench-2**
Made by StableFluffy
**Contact (Do not contact me for personal matters.)**
Discord : is.maywell
Telegram : AlzarTakkarsen
## License
This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only, which takes priority over the **Mistral Apache 2.0** license.
The "Model" (i.e., the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the included **cc-by-nc-4.0** license and the non-commercial use clause remain in any parent repository, regardless of other models' licenses.
The license may change after a new model is released. If you want to use this model for commercial purposes, contact me.
## Model Details
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
**Trained On**
A100 80GB * 4
# **Model Benchmark**
X
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
--- |
jiwoochris/ko-llama2-13b-v4 | jiwoochris | "2023-10-22T15:20:25Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T15:04:59Z" | ---
license: mit
---
|
GAI-LLM/ko-en-llama2-13b-mixed-v5 | GAI-LLM | "2023-10-28T07:21:38Z" | 1,322 | 2 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-28T07:04:51Z" | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/ko-en-llama2-13b-mixed-v5**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/ko-en-llama2-13b-mixed-v5 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
- We combined Open Korean Datasets using a mixed strategy.
- We used 8 × A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/ko-en-llama2-13b-mixed-v5
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/ko-en-llama2-13b-mixed-v5"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
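As a follow-up to the loading code above, a minimal generation call might look like this (a sketch; the prompt and generation settings are illustrative assumptions, not from the card):
```python
# Assumes `model` and `tokenizer` from the snippet above.
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```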
--- |
MNCJ1hun/Zephyr-7B-alpha-OP-u1k-ver0.1 | MNCJ1hun | "2023-10-29T13:37:08Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-29T00:19:43Z" | Entry not found |
MNCJ1hun/Mistral-7B-OP-u1k-ver0.4 | MNCJ1hun | "2023-10-30T10:45:54Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"mistralai",
"en",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-29T11:42:54Z" | ---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
- mistralai
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- Orca-style dataset
- Alpaca-style dataset
### Prompt Template
- Llama Prompt Template
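The card names the template but does not spell it out. Below is a sketch of the standard Llama-2 chat format, under the assumption that this is what is meant:
```python
def llama2_prompt(system: str, user: str) -> str:
    # Standard single-turn Llama-2 chat layout (assumed, not confirmed by this card).
    # The BOS token is usually added by the tokenizer and is omitted here.
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

print(llama2_prompt("You are a helpful assistant.", "Hello!"))
```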
## Contact Us
- [Minds And Company](https://mnc.ai/)
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) |
MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4 | MNC-Jihun | "2023-10-31T07:01:05Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-31T04:52:55Z" | Entry not found |
DopeorNope/mistralopithecus-v2-dpo-7b | DopeorNope | "2023-11-26T09:02:22Z" | 1,322 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-26T08:40:46Z" | Entry not found |
Kaeri-Jenti/llama-2-koen-13b-v1.3 | Kaeri-Jenti | "2023-11-27T00:00:00Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-26T23:51:21Z" | ---
license: llama2
---
|
Ja-ck/Mistral-instruct-DPO-Y24-v2 | Ja-ck | "2023-12-06T23:18:02Z" | 1,322 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-06T22:46:51Z" | ---
license: apache-2.0
pipeline_tag: text-generation
---
## Prompt Template: ChatML
```
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
{output}<|im_end|>
``` |
Minirecord/minyi_6b | Minirecord | "2023-12-07T10:33:54Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-07T10:30:58Z" | ---
license: apache-2.0
---
|
maywell/Synatra-7B-v0.3-QA | maywell | "2023-12-23T10:46:08Z" | 1,322 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-22T22:13:57Z" | ---
license: cc-by-sa-4.0
---
This model was trained on a wiki QA set for 2 epochs. The model's own capability has degraded, but I believe it can serve as MoE material. |
boracious/llama-2-7b-test | boracious | "2023-12-24T13:59:39Z" | 1,322 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-24T13:45:37Z" | Entry not found |
ifuseok/yi-ko-playtus-instruct-v0.2 | ifuseok | "2024-01-11T05:19:06Z" | 1,322 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:nlpai-lab/databricks-dolly-15k-ko",
"dataset:kyujinpy/KOR-OpenOrca-Platypus-v3",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-27T07:04:09Z" | ---
language:
- en
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
datasets:
- nlpai-lab/databricks-dolly-15k-ko
- kyujinpy/KOR-OpenOrca-Platypus-v3
---
**Input** Models input text only.
**Output** Models generate text only.
**Base Model** [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)
**Training Dataset**
- [nlpai-lab/databricks-dolly-15k-ko](https://huggingface.co/datasets/nlpai-lab/databricks-dolly-15k-ko)
- [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3)
# Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "ifuseok/yi-ko-playtus-instruct-v0.2"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
# Prompt Example
```
<|system|>
This is the system message. <|endoftext|>
<|user|>
This is the user.<|endoftext|>
<|assistant|>
This is the assistant.<|endoftext|>
``` |
Qdrant/clip-ViT-B-32-text | Qdrant | "2024-04-30T17:18:11Z" | 1,322 | 0 | transformers | [
"transformers",
"onnx",
"clip_text_model",
"endpoints_compatible",
"region:us"
] | null | "2024-04-30T17:07:54Z" | Entry not found |
Eurdem/Defne_llama3_2x8B | Eurdem | "2024-05-16T09:01:01Z" | 1,322 | 5 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"llama-3",
"conversational",
"en",
"tr",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-10T16:19:43Z" | ---
license: llama3
tags:
- moe
- merge
- llama-3
language:
- en
- tr
pipeline_tag: text-generation
library_name: transformers
---
## 💻 For English
Defne_llama3_2x8B is a Mixture of Experts (MoE) of two llama3 models.
(Change the system prompt for Turkish as shown below)
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "Eurdem/Defne_llama3_2x8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_8bit= True)
messages = [{"role": "system", "content": "You are a helpful chatbot, named Defne, who always responds friendly."},
{"role": "user", "content": "Answer the questions: 1) Who are you? 2) f(x)=3x^2+4x+12 so what is f(3)?"},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.7, top_k=500,)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Output
```
Hello there! I'm Defne, a friendly chatbot here to help with any questions you may have.
Now, let's get to the math problem!
The function is f(x) = 3x^2 + 4x + 12, and we want to find f(3). To do that, we can plug in 3 for x in the function:
f(3) = 3(3)^2 + 4(3) + 12
f(3) = 3(9) + 12 + 12
f(3) = 27 + 24
f(3) = 51
So, f(3) is equal to 51!
```
## 💻 For Turkish
Defne_llama3_2x8B is an MoE model created by merging two llama3 8B models.
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "Eurdem/Defne_llama3_2x8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto", load_in_8bit= True)
messages = [{"role": "system", "content": "Sen, Defne isimli Türkçe konuşan bir chatbotsun."},
{"role": "user", "content": "Soruları numaralandırarak cevapla. 1) Sen kimsin? 2)f(x)=3x^2+4x+12 ise f(3) kaçtır?"}
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")
outputs = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.7, top_p=0.7, top_k=500,)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
### Output
```
Merhaba!
1. Ben Defne, Türkçe konuşan bir chatbot.
2. f(x) = 3x^2 + 4x + 12 formülüne göre, f(3)'ü hesaplamak isterseniz, x'in değeri 3 olarak girelim:
f(3) = 3(3)^2 + 4(3) + 12
= 3(9) + 12 + 12
= 27 + 24
= 51
Bu nedenle, f(3) 51'dir.
```
|
mii-llm/maestrale-chat-v0.4-alpha-sft | mii-llm | "2024-05-11T23:40:31Z" | 1,322 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"sft",
"it",
"chatml",
"axolotl",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-11T07:20:29Z" | ---
language:
- it
license: cc-by-nc-4.0
tags:
- sft
- it
- mistral
- chatml
- axolotl
prompt_template: <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|>
<|im_start|>assistant
model-index:
- name: maestrale-chat-v0.4-alpha-sft
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/CQc6d7W.jpeg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Maestrale chat alpha ༄
By @efederici and @mferraretto
## Model description
- **Language Model**: Mistral-7b for the Italian language, continued pre-training for Italian on a curated large-scale high-quality corpus, merged with [occiglot](https://huggingface.co/occiglot/occiglot-7b-eu5).
- **Fine-Tuning**: SFT performed on 1.7M convs/instructions for 2 epochs.
**v0.4**
- Agent
- Improved truthfulness
- Improved Math & Reasoning capabilities
- Mermaid mindmaps
- More latin translations, poems, ...
This model uses ChatML prompt format:
```
<|im_start|>system
Sei un assistente utile.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Scores
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|------------|------:|------|-----:|--------|-----:|---|-----:|
|hellaswag_it| 1|none | 0|acc |0.5220|± |0.0052|
| | |none | 0|acc_norm|0.6887|± |0.0048|
|arc_it | 1|none | 0|acc |0.1762|± |0.0111|
| | |none | 0|acc_norm|0.5090|± |0.0146|
|m_mmlu_it | 0|none | 5|acc |0.569 |± |0.0043|
## Usage:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GenerationConfig,
TextStreamer
)
import torch
tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.4-alpha-sft")
model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.4-alpha-sft", load_in_8bit=True, device_map="auto")
gen = GenerationConfig(
do_sample=True,
temperature=0.7,
repetition_penalty=1.2,
top_k=50,
top_p=0.95,
max_new_tokens=500,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)
streamer = TextStreamer(tokenizer, skip_prompt=True)
messages = [
{"role": "system", "content": "Sei un assistente utile."},
{"role": "user", "content": "{prompt}"}
]
with torch.no_grad():
temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(temp, return_tensors="pt").to("cuda")
_ = model.generate(
**inputs,
streamer=streamer,
generation_config=gen
)
```
## Intended uses & limitations
It's an alpha version; it's not `safe`, but it can refuse to answer.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
Nara-Lab/nallm-polyglot-ko-1.3b-base | Nara-Lab | "2023-06-28T09:24:15Z" | 1,321 | 2 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-06-22T01:12:03Z" | ---
license: mit
language:
- ko
---
NA-LLM (나름) is a Korean Large Language Model (LLM) developed by Nara Information.
https://github.com/Nara-Information/NA-LLM |
MarkrAI/kyujin-CoTy-platypus-ko-12.8b | MarkrAI | "2023-10-19T13:31:19Z" | 1,321 | 3 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"ko",
"dataset:kyujinpy/KoCoT_2000",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-03T17:56:43Z" | ---
language:
- ko
datasets:
- kyujinpy/KoCoT_2000
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**This model was developed by the LLM research consortium of MediaGroup Saram-gwa-Soop Co., Ltd. and Marker Inc.**
**The license is `cc-by-nc-sa-4.0`.**
# **CoTy-platypus-ko**

**Poly-platypus-ko + CoT = CoTy-platypus-ko**
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
CoTy-platypus-ko is an auto-regressive language model based on the polyglot-ko transformer architecture.
**Repo Link**
Github CoTy-platypus-ko: [CoTy-platypus-ko](https://github.com/KyujinHan/Poly-platypus-ko)
**Base Model**
[Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b)
**Fine-tuning method**
Methodology by [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus)+[CoT-llama2-ko](https://github.com/Marker-Inc-Korea/CoT-llama2)
**Training Dataset**
I use [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000).
I used an A100 40GB GPU on Colab for training.
---
# **Model Benchmark 1**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| CoTy-platypus-ko-12.8b(ours) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 |
| [KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B) | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |
| [DopeorNope/COLA3-7B](https://huggingface.co/DopeorNope/COLA3-7B) | 45.61 | 39.16 | 50.98 | 35.21 | 37.81 | 64.91 |
> Compare with Top 4 SOTA models. (update: 10/03)
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "MarkrAI/kyujin-CoTy-platypus-ko-12.8b"
CoT_llama = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
CoT_llama_tokenizer = AutoTokenizer.from_pretrained(repo)
```
> Readme format: [kyujinpy/KoT-platypus2-7B](https://huggingface.co/kyujinpy/KoT-platypus2-7B)
--- |
kiyoonyoo/ko-platypus-13b-control | kiyoonyoo | "2023-10-17T01:08:34Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-16T23:46:29Z" | Entry not found |
Jaewoo1/Platypus7B_Follow_FT | Jaewoo1 | "2023-10-21T09:42:27Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-21T09:28:50Z" | Entry not found |
mncai/Mistral-7B-v0.1-alpaca-1k | mncai | "2023-10-22T05:59:28Z" | 1,321 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"arxiv:2306.02707",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T05:15:15Z" | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- beomi/KoAlpaca-v1.1a
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- beomi/KoAlpaca-v1.1a
### Prompt Template
- Llama Prompt Template
## Limitations & Biases:
Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) |
caisarl76/Mistral-7B-orca-1k-platy-1k | caisarl76 | "2023-10-22T15:11:12Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"dataset:kyujinpy/OpenOrca-KO",
"arxiv:2306.02707",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T12:38:41Z" | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/KOpen-platypus
- kyujinpy/OpenOrca-KO
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- kyujinpy/KOpen-platypus
- kyujinpy/OpenOrca-KO
### Prompt Template
- Llama Prompt Template
## Limitations & Biases:
Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) |
MNCJihun/Mistral-7B-SlimOrca-orca-platy-1k | MNCJihun | "2023-10-23T07:11:42Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-23T07:04:30Z" | Entry not found |
jyoung105/KoR-Orca-Platypus-13B-neft | jyoung105 | "2023-10-23T17:20:33Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-23T09:40:02Z" | ---
license: cc-by-nc-sa-4.0
---
|
MNCLLM/Mistral-7B-OP-over500-grad1.0 | MNCLLM | "2023-10-25T09:39:14Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T09:14:20Z" | Entry not found |
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.2.1 | krevas | "2023-10-25T14:14:03Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T14:01:52Z" | ---
license: cc-by-nc-4.0
---
|
MNCKim/Mistral-7B-SlimOrca-OP-U2048-ran4k | MNCKim | "2023-10-26T05:07:51Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-26T04:57:32Z" | Entry not found |
KaeriJenti/ko-llama2-13b-platypus | KaeriJenti | "2023-11-06T00:50:22Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-06T00:24:17Z" | ---
license: llama2
---
|
DopeorNope/COKAL_pre_DPO_Test_v1-13b | DopeorNope | "2023-11-09T19:19:22Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-08T08:53:13Z" | Entry not found |
GAI-LLM/llama-2-koen-13b-mixed-v9 | GAI-LLM | "2023-11-16T02:38:02Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-16T02:25:46Z" | ---
license: cc-by-nc-4.0
---
|
etri-xainlp/llama2-ko-13b-instruct-v1.1 | etri-xainlp | "2023-11-26T04:42:21Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-24T01:33:57Z" | ---
license: apache-2.0
---
# llama2-ko-13b-instruct-v1.1
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an instruction-following dataset (109,974 examples) |
Minirecord/Mini_Orca_daekeun_llama13b | Minirecord | "2023-11-30T09:16:26Z" | 1,321 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-30T09:10:01Z" | ---
license: cc-by-sa-4.0
---
|
jingyeom/Yi-ko-1.1 | jingyeom | "2023-12-26T01:50:55Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-26T01:46:55Z" | Entry not found |
genne/eclectus1.1 | genne | "2023-12-26T02:22:26Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-26T02:18:27Z" | Entry not found |
jingyeom/Yi-ko-1.2 | jingyeom | "2023-12-28T05:48:36Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-28T05:38:49Z" | Entry not found |
Azazelle/Sina-Loki-7b-Merge | Azazelle | "2024-01-11T00:39:35Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T00:09:47Z" | ---
pipeline_tag: text-generation
tags:
- mistral
- merge
license: cc-by-4.0
---
# Model Card for Sina-Loki-7b-Merge
<!-- Provide a quick summary of what the model is/does. -->
Part of a series of experimental DARE merges.
The `.yaml` file for mergekit:
```yaml
models:
- model: RatanRohith/SRBOSGPT-7B-slerp
# no parameters necessary for base model
- model: rishiraj/smol-7b #75
parameters:
weight: 0.2
density: 0.41
- model: SanjiWatsuki/openchat-3.5-1210-starling-slerp #125
parameters:
weight: 0.33
density: 0.54
- model: Azazelle/Dumb-Maidlet #200
parameters:
weight: 0.53
density: 0.71
merge_method: dare_ties
base_model: RatanRohith/SRBOSGPT-7B-slerp
parameters:
int8_mask: true
dtype: bfloat16
``` |
shitshow123/stablelm_sft_dpo | shitshow123 | "2024-01-11T05:30:03Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-11T05:23:16Z" | ---
license: apache-2.0
---
|
ibndias/Nous-Hermes-2-MoE-2x34B | ibndias | "2024-03-05T01:33:50Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-12T02:52:14Z" | ---
license: apache-2.0
model-index:
- name: Nous-Hermes-2-MoE-2x34B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.73
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 76.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 58.08
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 83.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.52
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ibndias/Nous-Hermes-2-MoE-2x34B
name: Open LLM Leaderboard
---
This is an experimental model that combines Nous Hermes 2 Yi 34B into a 2x34B Mixture of Experts (MoE).
The base model is Yi-34B.
All credit belongs to NousResearch for the fine-tuned Yi model, 01-AI for the Yi model, and Charles O. Goddard for `mergekit`.
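A minimal loading sketch using the standard `transformers` API (the dtype and device settings here are assumptions, not taken from this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "ibndias/Nous-Hermes-2-MoE-2x34B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep the 2x34B weights manageable
    device_map="auto",
)
inputs = tokenizer("What is a mixture of experts?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```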
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ibndias__Nous-Hermes-2-MoE-2x34B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.30|
|AI2 Reasoning Challenge (25-Shot)|66.64|
|HellaSwag (10-Shot) |85.73|
|MMLU (5-Shot) |76.49|
|TruthfulQA (0-shot) |58.08|
|Winogrande (5-shot) |83.35|
|GSM8k (5-shot) |69.52|
|
Kquant03/Buttercup-4x7B-bf16 | Kquant03 | "2024-02-29T02:30:56Z" | 1,321 | 6 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"moe",
"merge",
"en",
"arxiv:2101.03961",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-22T06:21:27Z" | ---
license: apache-2.0
language:
- en
tags:
- moe
- merge
---

# "[We] are joined by the bonds of love. And you cannot track that, not with a thousand bloodhounds, and you cannot break it, not with a thousand swords."
[GGUF FILES HERE](https://huggingface.co/Kquant03/Buttercup-4x7B-GGUF)
[EXL2 QUANT (Thank you royallab!!!)](https://huggingface.co/royallab/Buttercup-4x7B-exl2)
[Join our Discord!](https://discord.gg/ZgU79QDnE2)
A frankenMoE built not only with far better methodology and a fundamental understanding of SMoE, but also completely focused on intellectual roleplay. It may have a bit of a redundancy issue (I have actually been playing with the q8_k quant while the GGUF files upload, and it has nice variety). However, just in case, try to keep things fresh with the model by either introducing new concepts often or through [drμgs](https://github.com/EGjoni/DRUGS). (no, not that kind)
The config looks like this...(detailed version is in the files and versions):
- [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - base
- [fblgit/una-cybertron-7b-v3-OMA](https://huggingface.co/fblgit/una-cybertron-7b-v3-OMA) - expert #1
- [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2) - expert #2
- [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - expert #3
- [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - expert #4
# Completely mogs mixtral instruct 0.1 across multiple benchmarks at half the size


# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
Sparse MoE layers are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
A gate network or router, that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token "Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively.
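To make the routing step concrete, here is a minimal PyTorch sketch of top-2 token routing (an illustration of the general idea only, not Mixtral's or this repo's actual implementation):
```python
import torch
import torch.nn as nn

class Top2MoELayer(nn.Module):
    """Toy sparse MoE layer: a learned gate picks 2 experts per token."""
    def __init__(self, hidden_size: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(hidden_size, 4 * hidden_size),
                nn.GELU(),
                nn.Linear(4 * hidden_size, hidden_size),
            )
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, hidden)
        weights, idx = torch.topk(self.gate(x).softmax(dim=-1), k=2, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(2):  # combine the two chosen experts additively
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = Top2MoELayer(hidden_size=16, num_experts=4)
print(layer(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```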

Switch Layer
MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
Training: MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
Inference: Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need to have enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (more on this soon).
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the aux_loss parameter.
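The auxiliary loss mentioned above can be sketched in a few lines, following the Switch Transformers formulation: num_experts times the sum over experts of (fraction of tokens dispatched to the expert) times (mean router probability for the expert). This is an illustration, not the exact `transformers` internals:
```python
import torch
import torch.nn.functional as F

def load_balancing_loss(router_probs: torch.Tensor, top1_expert: torch.Tensor,
                        num_experts: int) -> torch.Tensor:
    # router_probs: (tokens, num_experts) softmax outputs of the gate
    # top1_expert:  (tokens,) index of the expert each token was dispatched to
    dispatch = F.one_hot(top1_expert, num_experts).float()
    tokens_per_expert = dispatch.mean(dim=0)    # fraction of tokens per expert
    prob_per_expert = router_probs.mean(dim=0)  # mean gate probability per expert
    return num_experts * torch.sum(tokens_per_expert * prob_per_expert)

probs = torch.softmax(torch.randn(32, 4), dim=-1)
print(load_balancing_loss(probs, probs.argmax(dim=-1), num_experts=4))
```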
## "Wait...but you called this a frankenMoE?"
The difference between an MoE and a "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained simultaneously with the experts. |
stablediffusionapi/pornvidion | stablediffusionapi | "2024-02-06T19:24:38Z" | 1,321 | 3 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-02-06T19:21:56Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# PornVidion API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment needed.
Replace the key in the code below, and change **model_id** to "pornvidion"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/pornvidion)
Model link: [View model](https://modelslab.com/models/pornvidion)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "pornvidion",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
fatgong/5HC6qXCVXJ4Toc9F9UAcMuwmFAizhRkNX3hRbYtg19RwPH7P_vgg | fatgong | "2024-03-20T18:03:03Z" | 1,321 | 0 | keras | [
"keras",
"region:us"
] | null | "2024-03-09T14:11:45Z" | Entry not found |
ibivibiv/llama-3-nectar-dpo-8B | ibivibiv | "2024-05-14T18:56:48Z" | 1,321 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"arxiv:1910.09700",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-14T12:28:30Z" | ---
library_name: transformers
license: llama3
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingfacepremium/phi3-Medium-Medical-Chat-Q4_K_M-GGUF | huggingfacepremium | "2024-06-30T10:27:50Z" | 1,321 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:srikar-v05/phi3-Medium-Medical-Chat",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-30T10:27:13Z" | ---
base_model: srikar-v05/phi3-Medium-Medical-Chat
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- llama-cpp
- gguf-my-repo
---
# huggingfacepremium/phi3-Medium-Medical-Chat-Q4_K_M-GGUF
This model was converted to GGUF format from [`srikar-v05/phi3-Medium-Medical-Chat`](https://huggingface.co/srikar-v05/phi3-Medium-Medical-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/srikar-v05/phi3-Medium-Medical-Chat) for more details on the model.
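Besides the llama.cpp CLI flows below, the GGUF file can also be used directly from Python via the `llama-cpp-python` bindings; a minimal sketch, assuming a recent `llama-cpp-python` version (with `huggingface_hub` installed) that provides the `from_pretrained` helper:
```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub and loads it (filename taken from this repo).
llm = Llama.from_pretrained(
    repo_id="huggingfacepremium/phi3-Medium-Medical-Chat-Q4_K_M-GGUF",
    filename="phi3-medium-medical-chat-q4_k_m.gguf",
    n_ctx=2048,
)
print(llm("The meaning to life and the universe is", max_tokens=64)["choices"][0]["text"])
```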
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo huggingfacepremium/phi3-Medium-Medical-Chat-Q4_K_M-GGUF --hf-file phi3-medium-medical-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo huggingfacepremium/phi3-Medium-Medical-Chat-Q4_K_M-GGUF --hf-file phi3-medium-medical-chat-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo huggingfacepremium/phi3-Medium-Medical-Chat-Q4_K_M-GGUF --hf-file phi3-medium-medical-chat-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo huggingfacepremium/phi3-Medium-Medical-Chat-Q4_K_M-GGUF --hf-file phi3-medium-medical-chat-q4_k_m.gguf -c 2048
```
|
haisongzhang/roberta-tiny-cased | haisongzhang | "2021-05-19T17:53:53Z" | 1,320 | 3 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"text-embeddings-inference",
"region:us"
] | feature-extraction | "2022-03-02T23:29:05Z" | Github: https://github.com/haisongzhang/roberta-tiny-cased
|
timm/xception65.ra3_in1k | timm | "2023-04-21T23:44:17Z" | 1,320 | 1 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:2110.00476",
"arxiv:1802.02611",
"arxiv:1610.02357",
"license:apache-2.0",
"region:us"
] | image-classification | "2023-04-21T23:43:33Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for xception65.ra3_in1k
An Aligned Xception image classification model. Pretrained on ImageNet-1k in `timm` by Ross Wightman using RandAugment `RA3` recipe. Related to `B` recipe in [ResNet Strikes Back](https://arxiv.org/abs/2110.00476).
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 39.9
- GMACs: 14.0
- Activations (M): 52.5
- Image size: 299 x 299
- **Papers:**
- Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation: https://arxiv.org/abs/1802.02611
- Xception: Deep Learning with Depthwise Separable Convolutions: https://arxiv.org/abs/1610.02357
- ResNet strikes back: An improved training procedure in timm: https://arxiv.org/abs/2110.00476
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('xception65.ra3_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xception65.ra3_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 150, 150])
# torch.Size([1, 256, 75, 75])
# torch.Size([1, 728, 38, 38])
# torch.Size([1, 1024, 19, 19])
# torch.Size([1, 2048, 10, 10])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'xception65.ra3_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 10, 10) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Citation
```bibtex
@inproceedings{deeplabv3plus2018,
title={Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation},
author={Liang-Chieh Chen and Yukun Zhu and George Papandreou and Florian Schroff and Hartwig Adam},
booktitle={ECCV},
year={2018}
}
```
```bibtex
@misc{chollet2017xception,
title={Xception: Deep Learning with Depthwise Separable Convolutions},
author={François Chollet},
year={2017},
eprint={1610.02357},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
```bibtex
@inproceedings{wightman2021resnet,
title={ResNet strikes back: An improved training procedure in timm},
author={Wightman, Ross and Touvron, Hugo and Jegou, Herve},
booktitle={NeurIPS 2021 Workshop on ImageNet: Past, Present, and Future}
}
```
|
KRAFTON/KORani-v2-13B | KRAFTON | "2023-05-08T07:23:25Z" | 1,320 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"vicuna",
"KoVicuna",
"KORani",
"ko",
"en",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-26T06:52:01Z" | ---
license: apache-2.0
language:
- ko
- en
pipeline_tag: text-generation
tags:
- vicuna
- llama
- KoVicuna
- KORani
---
# KORani-v2-13B
**The `v1`, `v2`, `v3` suffixes do not indicate the best or most recent model.**
- KORani: Large Language Models for 🇰🇷 Korean and 🇺🇸 English using LLaMA 13B and Polyglot 12.8B.
- We tested which LLM is most effective for 🇰🇷 Korean tasks after finetuning.
- More information at https://github.com/krafton-ai/KORani
- This repository contains fine-tuned language model weights based on LLaMA 13B
## Release
This repository contains inference code for KORani models that are based on [LLaMA 13B](https://arxiv.org/abs/2302.13971v1) and [Polyglot 12.8B](https://huggingface.co/EleutherAI/polyglot-ko-12.8b).
KORani models are finetuned using the [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/tree/main) & [KoVicuna](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko) datasets. This work is heavily influenced by the [Vicuna](https://github.com/lm-sys/FastChat) project.
### Models
| Model | Base | Train dataset | Huggingface Link |
| --- | ---: | ---: | ---: |
| 1️⃣ KORani-v1-13B | Polyglot 12.8B | KoVicuna dataset | [Link 1](https://huggingface.co/KRAFTON/KORani-v1-13B) |
| 2️⃣ KORani-v2-13B | LLaMA 13B | KoVicuna dataset | [Link 2](https://huggingface.co/KRAFTON/KORani-v2-13B) |
| 3️⃣ KORani-v3-13B | LLaMA 13B | ShareGPT & KoVicuna dataset | [Link 3](https://huggingface.co/KRAFTON/KORani-v3-13B) |
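## Quick Usage (sketch)
For quick experimentation, a minimal loading-and-generation sketch with Hugging Face Transformers might look like the following. This is an illustrative assumption based on the standard causal-LM API; the decoding settings are not the authors' reference configuration, and the official inference code in the KORani GitHub repo is authoritative.
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "KRAFTON/KORani-v2-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt; the "### Assistant:" suffix mirrors the examples below.
prompt = "대한민국의 수도는 어디인가요?\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```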
## Performances
We used AutoEvalGPT, inspired by the GPT-4-based automatic evaluation from [Vicuna](https://github.com/lm-sys/FastChat).
For evaluation details, see the GitHub repo: https://github.com/krafton-ai/AutoEvalGPT
### Translation (ENG -> KOR)
```
input = "Hey! I have some Kissflow Legal Review requests that are blocked by Tax and Accounting, can this be looked at?"
```
| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 헤이! 제가 Tax and Accounting에 의해 차단된 몇 가지 Kissflow 법률 검토 요청이 있는데, 이것을 확인해 주실 수 있나요? |
| DeepL | 9.4 | 안녕하세요! 세무 및 회계에서 차단된 Kissflow 법률 검토 요청이 몇 개 있는데요, 이 요청을 살펴볼 수 있나요? |
| GPT-3.5-turbo | 8.6 | 안녕하세요! 세무 및 회계 부서에서 차단된 몇 가지 Kissflow Legal Review 요청이 있습니다. 확인해 주실 수 있나요? |
| Vicuna-13B | 3.8 | 안녕하세요! 세금계산과 회계부서가 차단해 있는 Kissflow Legal Review 요청이 몇 개가 있습니까? 이것을 살펴보시겠습니까? |
| KoAlpaca-13B | 5.6 | 야! 세금과 회계에 막힌 키싱플로우 법적 검토 요청이 있는데, 이거 검토해줄 수 있어? |
| KORani-v1 | 7.5 | 안녕하세요! 세금과 회계로 인해 막혀 있는 키스플로우 법률 검토 요청이 몇 개 있는데, 검토해 주실 수 있나요? |
| KORani-v2 | 5.4 | 안녕하세요! 제가 Kissflow Legal Review 요청을 목격했는데, 세무 및 회계 부서에서 차단하고 있는데 이 문제를 조사해 주시겠어요? |
| KORani-v3 | 7.1 | 안녕하세요! 저는 Kissflow Legal Review 요청이 세금과 회계에 의해 차단되고 있는데, 이 문제가 살펴볼 수 있을까요? |
### QA(Korean)
```
prompt = "우리는 아래와 같은 정보를 갖고 있습니다.
---------------------
헨리 구스타프 몰래슨(영어: Henry Gustav Molaison, 1926년 2월 26일 ~ 2008년 12월 2일)은 뇌전증을 치료하기 위해서 수술적으로 해마를 포함한 내측측두엽이 제거된 미국의 기억장애 환자 H.M으로 전부터 이미 알려져 있었다.
그는 1957년 말부터 그가 죽을 때 까지 그는 광범위하게 연구가 되었다. 그의 경우는 뇌 기능과 기억 사이의 링크를 설명하는 이론 발달과 뇌인지심리학의 발달, 뇌의 구조와 기능이 특정 심리적 과정과 관한 이해를 목표로 하는 심리학의 한 지점에 매우 중요한 역할을 하였다.
그가 죽기 전에, 그는 코네티컷 주 윈저 락스에 위치한 관리기관에 살면서 지속적으로 조사가 되었다.
그의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다.
---------------------
### 주어진 정보에 따라, 질문에 답해주세요.: '헨리 구스타프의 뇌는 현재 어디에 위치해 있는가?'
### Assistant:"
```
| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 헨리 구스타프의 뇌는 현재 미국 샌디에이고에 위치해 있다. |
| GPT-3.5-turbo | 7.1 | 헨리 구스타프의 뇌는 지금 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있다. |
| Vicuna-13B | 7.2 | 헨리 구스타프의 뇌는 2009년 12월 4일에 조직학적으로 슬라이스되어 미국 샌디에이고에 있습니다. |
| KoAlpaca-13B | 2.6 | Henry Gustav Molaison의 뇌는 2008년에 사망하였습니다. 그의 뇌는 코네티컷 주 윈저 락스에 위치한 기관에 보관되어 있으며, 현재도 계속 조사중입니다. |
| KORani-v1 | 6.6 | 헨리 구스타프 몰레이슨의 뇌는 현재 미국 샌디에이고에 있습니다. 그의 뇌는 1957년 수술로 제거된 그의 내측 측두엽의 자리를 확인하기 위해 조직학적으로 절단되었습니다. 이 연구는 뇌의 구조와 기능이 특정 심리적 과정과 관련된 이해를 목표로 하는 심리학의 한 분야인 인지 신경과학에 대한 그의 공헌을 기리기 위한 것입니다." |
| KORani-v2 | 8.0 | 헨리 구스타프의 뇌는 현재 캘리포니아 샌디에이고에 있는 샌디에이고 대학교 웨스트 캠퍼스의 신경과학 실험실에 보관되어 있습니다. |
| KORani-v3 | 6.4 | 헨리 구스타프의 뇌는 현재 코네티컷 주 윈저 락스에 있는 관리 기관에 있습니다. |
### Summarization (Korean)
```
prompt = "# Document
전년도 대비 79명 늘어 1019명, 행정수요 대처 광양시의 공무원 정원이 크게 늘어나 행정서비스 향상이 기대된다.
시는 행정안전부에서 발표한 2018년도 자치단체 기준인건비에서 광양시의 일반직 정원이 지난해보다 79명이 늘어난 1019명으로 산정됐다고 밝혔다.
지난 1995년 도농통합 당시 991명이었던 광양시 공무원 정원은 IMF시절에 혹독한 구조조정을 거치면서 2002년에는 788명으로 200여명이 가까이 줄어들었으나 이번 정원 확보로 공무원 정원 1000명 시대를 맞게 됐다.
그동안 광양시는 포스코를 중심으로 한 산업단지와 컨테이너부두, 경제자유구역, 택지 개발, 다양한 복지 정책 등 시민의 삶의 질 향상을 위한 행정수요가 날로 증폭하는 데에 비해 한정된 공무원 정원으로 많은 어려움을 겪어 왔었다.
시의 이번 정원 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역과 나라 발전을 뒷받침 하는 경제도시로서의 위상을 드높이는 시너지 효과로 이어질 것으로 기대된다.
한편, 광양시는 한시기구인‘기업유치추진단’이 2017년 연말로 폐지됨에 따라 전라남도로부터 4급 상설기구인‘산단녹지관리센터’를 승인 받아 올해 1월부터 운영하게 됨으로써 도시 개발에 한층 탄력을 받을 것으로 전망하고 있다.
정홍기 총무과장은“참으로 어렵게 가져온 공무원 정원과 4급 상설기구는 격무에 시달려온 직원들의 숨통을 트이게 해 15만 시민을 위한 행정서비스 질을 향상시킬 것으로 기대된다”며“이달 안에 정원 조례를 개정하고, 직무분석을 통해 부서별 정원을 조정할 계획이다”고 말했다.
한국어 3문장으로 위의 Document를 요약해줘.
### Assistant: 1)"
```
| Model | Score (averaged over 5 examples) | Output Example |
| --- | :---: | ---: |
| GPT-4 | - | 1) 광양시의 공무원 정원이 전년도 대비 79명 늘어난 1019명으로 증가했다. 2) 이로 인해 행정서비스 향상이 기대되며, 지역과 나라 발전을 뒷받침하는 경제도시로서의 위상이 상승할 것으로 전망된다. 3) 한편, 광양시는 전라남도로부터 4급 상설기구인 '산단녹지관리센터'를 승인받아 도시 개발에 탄력을 받을 것으로 예상된다. |
| GPT-3.5-turbo | 8.5 | 1) 광양시 공무원 정원이 79명 늘어 1019명으로 늘어나 행정서비스 향상이 기대된다고 밝혔다.2) 광양시는 이번 정원 충원으로 지역과 나라 발전을 뒷받침 하는 경제도시 위상을 드높일 것으로 기대된다. 3) 정원 조례를 개정하고, 직무분석을 통해 부서별 정원을 조정할 계획이다. |
| Vicuna-13B | 7.4 | 1) 2018년도 광양시의 공무원 정원 1019명, 지난해 79명 증가. 2) 이번 정원 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역과 나라 발전을 뒷받침 하는 경제도시로서의 위상을 드높이는 시너지 효과로 이어질 것으로 기대된다. 3) 광양시는 전라남도로부터 4급 상설기구인‘산단녹지관리센터’를 승인 받아 운영하게 됨으로써 도시 개발에 한층 탄력을 받을 것으로 전망하고 있다. |
| KoAlpaca-13B | 5.7 | 1) 광양시는 작년보다 79명 늘어난 1019명의 공무원을 둘 수 있게 되었습니다. 2) 이는 행정안전부의 지침에 따른 것입니다. 3) 이로써 광양시는 한정된 예산으로 인해 발생했던 공무원들의 초과 근무 수당 지급 등의 문제를 해결 할 수 있게 되었습니다. |
| KORani-v1 | 7.0 | 1) 광양시는 작년보다 79명이 늘어난 1019명의 일반직 공무원 정원을 행안부로부터 승인받았습니다. 2) 이는 광양시의 행정 수요 증가를 수용하기 위한 것입니다. 3) 이 정원 증가는 광양시 공무원들에게 더 많은 기회를 제공할 것입니다. |
| KORani-v2 | 6.2 | 1)광양시의 공무원 정원은 이번 정원 확보로 1000명 주당 공무원 정원을 달성했습니다. 2) 이러한 충원은 지금까지 격무에 시달려온 공무원들에게 더욱 열심히 일할 수 있도록 활력을 불어넣으면서 지역사회와 국가의 발전을 지원하는 시너지 효과로 이어질 것으로 기대됩니다. 3) 광양시는 '기업유치추진단'을 폐지하고 '산단녹지관리센터'를 승인했으며, 이는 도시 개발에 한층 탄력을 받을 것으로 기대됩니다. |
| KORani-v3 | 6.9 | 1)광양시의 공무원 정원은 지난해보다 79명이 늘어난 1019명으로 산정되었습니다.2) 이러한 충원은 지역사회와 국가 발전을 지원하는 행정서비스의 향상으로 이어질 것으로 기대됩니다.3) 시는 '이달 내에 정원 조례를 개정하고 직무분석을 통해 부서별로 정원을 조정할 계획'이며 행정서비스 향상을 위해 노력할 것이라고 밝혔습니다. |
## License
Our github repo and models are intended for research purposes and non-commercial use only, subject to the model License of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
The code is released under the Apache License 2.0. |
topel/ConvNeXt-Tiny-AT | topel | "2023-10-04T14:17:07Z" | 1,320 | 8 | null | [
"safetensors",
"audio tagging",
"audio events",
"audio embeddings",
"convnext-audio",
"audioset",
"license:mit",
"region:us"
] | null | "2023-09-22T10:39:50Z" | ---
license: mit
tags:
- audio tagging
- audio events
- audio embeddings
- convnext-audio
- audioset
inference: false
---
**ConvNeXt-Tiny-AT** is an audio tagging CNN model, trained on **AudioSet** (balanced+unbalanced subsets). It reached 0.471 mAP on the test set [(Paper)](https://www.isca-speech.org/archive/interspeech_2023/pellegrini23_interspeech.html).
The model was trained on 10-second audio recordings sampled at 32 kHz, but you can provide any audio file; resampling and padding/cropping are included in the code snippet below.
The model provides logits and probabilities for the 527 audio event tags of AudioSet (see http://research.google.com/audioset/index.html).
Two methods can also be used to get scene embeddings (a single vector per file) and frame-level embeddings, see below.
The scene embedding is obtained from the frame-level embeddings, on which mean pooling is applied onto the frequency dim, followed by mean pooling + max pooling onto the time dim.
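As a rough illustration, that pooling could be written in PyTorch as follows. This is a sketch only: whether the two time poolings are summed, averaged, or concatenated in the released code is an assumption here, and the repository implementation is authoritative.
```python
import torch

# Hypothetical frame-level map shaped (batch, channels, time, freq), e.g. (1, 768, 31, 7).
frame_embeddings = torch.randn(1, 768, 31, 7)

x = frame_embeddings.mean(dim=-1)        # mean pooling over the frequency dim -> (1, 768, 31)
scene = x.mean(dim=-1) + x.amax(dim=-1)  # mean + max pooling over the time dim -> (1, 768)
```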
# Install
This code is based on our repo: https://github.com/topel/audioset-convnext-inf
You can pip install it:
```bash
pip install git+https://github.com/topel/audioset-convnext-inf@pip-install
```
# Usage
Below is an example of how to instantiate the model, make tag predictions on an audio sample, and get embeddings (scene and frame levels).
```python
import os
import numpy as np
import torch
from torch.nn import functional as TF
import torchaudio
import torchaudio.functional as TAF
from audioset_convnext_inf.pytorch.convnext import ConvNeXt
from audioset_convnext_inf.utils.utilities import read_audioset_label_tags
model = ConvNeXt.from_pretrained("topel/ConvNeXt-Tiny-AT", map_location='cpu')
print(
"# params:",
sum(param.numel() for param in model.parameters() if param.requires_grad),
)
if torch.cuda.is_available():
device = torch.device("cuda")
else:
device = torch.device("cpu")
if "cuda" in str(device):
model = model.to(device)
```
Output:
```
# params: 28222767
```
## Inference: get logits and probabilities
To run the following, first download ```254906__tpellegrini__cavaco1.wav``` and ```class_labels_indices.csv``` from this repository.
```python
sample_rate = 32000
audio_target_length = 10 * sample_rate # 10 s
# AUDIO_FNAME = "f62-S-v2swA_200000_210000.wav"
AUDIO_FNAME = "254906__tpellegrini__cavaco1.wav"
current_dir=os.getcwd()
AUDIO_FPATH = os.path.join(current_dir, AUDIO_FNAME)
waveform, sample_rate_ = torchaudio.load(AUDIO_FPATH)
if sample_rate_ != sample_rate:
print("Resampling from %d to 32000 Hz"%sample_rate_)
waveform = TAF.resample(
waveform,
sample_rate_,
sample_rate,
)
if waveform.shape[-1] < audio_target_length:
print("Padding waveform")
missing = max(audio_target_length - waveform.shape[-1], 0)
waveform = TF.pad(waveform, (0,missing), mode="constant", value=0.0)
elif waveform.shape[-1] > audio_target_length:
print("Cropping waveform")
waveform = waveform[:, :audio_target_length]
waveform = waveform.contiguous()
waveform = waveform.to(device)
print("\nInference on " + AUDIO_FNAME + "\n")
with torch.no_grad():
model.eval()
output = model(waveform)
logits = output["clipwise_logits"]
print("logits size:", logits.size())
probs = output["clipwise_output"]
# Equivalent: probs = torch.sigmoid(logits)
print("probs size:", probs.size())
lb_to_ix, ix_to_lb, id_to_ix, ix_to_id = read_audioset_label_tags(os.path.join(current_dir, "class_labels_indices.csv"))
threshold = 0.25
sample_labels = np.where(probs[0].clone().detach().cpu() > threshold)[0]
print("\nPredicted labels using activity threshold 0.25:\n")
# print(sample_labels)
for l in sample_labels:
print("%s: %.3f"%(ix_to_lb[l], probs[0,l]))
```
Output:
```
Resampling from 44100 to 32000 Hz
Padding waveform

Inference on 254906__tpellegrini__cavaco1.wav

logits size: torch.Size([1, 527])
probs size: torch.Size([1, 527])

Predicted labels using activity threshold 0.25:
Music: 0.896
Musical instrument: 0.686
Plucked string instrument: 0.608
Guitar: 0.369
Mandolin: 0.710
Ukulele: 0.268
```
Technically speaking, it's neither a Mandolin nor a Ukulele, but a Brazilian cousin, the cavaquinho!
## Get audio scene embeddings
```python
with torch.no_grad():
model.eval()
output = model.forward_scene_embeddings(waveform)
print("\nScene embedding, shape:", output.size())
```
Output:
```
Scene embedding, shape: torch.Size([1, 768])
```
## Get frame-level embeddings
```python
with torch.no_grad():
model.eval()
output = model.forward_frame_embeddings(waveform)
print("\nFrame-level embeddings, shape:", output.size())
```
Output:
```
Frame-level embeddings, shape: torch.Size([1, 768, 31, 7])
```
# Zenodo
The checkpoint is also available on Zenodo: https://zenodo.org/record/8020843/files/convnext_tiny_471mAP.pth?download=1
# Citation
[Paper available](https://www.isca-speech.org/archive/interspeech_2023/pellegrini23_interspeech.html)
Cite as: Pellegrini, T., Khalfaoui-Hassani, I., Labbé, E., Masquelier, T. (2023) Adapting a ConvNeXt Model to Audio Classification on AudioSet. Proc. INTERSPEECH 2023, 4169-4173, doi: 10.21437/Interspeech.2023-1564
```bibtex
@inproceedings{pellegrini23_interspeech,
author={Thomas Pellegrini and Ismail Khalfaoui-Hassani and Etienne Labb\'e and Timoth\'ee Masquelier},
title={{Adapting a ConvNeXt Model to Audio Classification on AudioSet}},
year=2023,
booktitle={Proc. INTERSPEECH 2023},
pages={4169--4173},
doi={10.21437/Interspeech.2023-1564}
}
```
|
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.1.12 | krevas | "2023-10-20T03:37:13Z" | 1,320 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-19T23:15:22Z" | ---
license: cc-by-nc-4.0
---
# LDCC-Instruct-Llama-2-ko-13B
<img src="./assets/icon.png" alt="image" width="50%" height="auto">
## Model Details
* **Developed by**: [Lotte Data Communication](https://www.ldcc.co.kr)
## Hardware and Software
* **Hardware**: We utilized a single node of 8 × A100 GPUs to train our model
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
## Prompt Template
```
### Prompt:
{instruction}
### Answer:
{output}
```
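A minimal sketch of how this template might be filled in and passed to the model with Transformers (the model loading and decoding settings are illustrative assumptions, not an official recipe):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "krevas/LDCC-Instruct-Llama-2-ko-13B-v4.1.12"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

instruction = "대한민국의 수도는 어디인가요?"  # illustrative instruction
prompt = f"### Prompt:\n{instruction}\n### Answer:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```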
# **Llama 2**
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
## Model Details
*Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
**Model Developers** Meta
**Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
||Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|*A new mix of publicly available online data*|7B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|13B|4k|✗|2.0T|3.0 x 10<sup>-4</sup>|
|Llama 2|*A new mix of publicly available online data*|70B|4k|✔|2.0T|1.5 x 10<sup>-4</sup>|
*Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Dates** Llama 2 was trained between January 2023 and July 2023.
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
## Intended Use
**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
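For illustration, a single-turn prompt assembled by hand would look roughly like the sketch below. This is a simplified reading of the format implemented in `chat_completion`; the reference code linked above is authoritative.
```
# Simplified sketch of the Llama-2-Chat single-turn prompt layout (illustrative content).
system_prompt = "You are a helpful assistant."
user_message = "What is the capital of France?"

prompt = (
    f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"
)
```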
**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
## Hardware and Software
**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
||Time (GPU hours)|Power Consumption (W)|Carbon Emitted(tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|
**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
## Training Data
**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
## Evaluation Results
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|
**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|
**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
## Ethical Considerations and Limitations
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
## Reporting Issues
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
## Llama Model Index
|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)| |
mncai/Mistral-7B-v0.1-platy-1k | mncai | "2023-10-22T04:57:06Z" | 1,320 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"MindsAndCompany",
"en",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"arxiv:2306.02707",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T04:44:03Z" | ---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/KOpen-platypus
---
## Model Details
* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
## Dataset Details
### Used Datasets
- kyujinpy/KOpen-platypus
### Prompt Template
- Llama Prompt Template
## Limitations & Biases:
Llama2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
## License Disclaimer:
This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
## Contact Us
- [Minds And Company](https://mnc.ai/)
## Citation:
Please kindly cite using the following BibTeX:
```bibtex
@misc{mukherjee2023orca,
title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
year={2023},
eprint={2306.02707},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```
@misc{Orca-best,
title = {Orca-best: A filtered version of orca gpt4 dataset.},
author = {Shahul Es},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```
```
@software{touvron2023llama2,
title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava,
Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann,
Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov,
Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith,
Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
year={2023}
}
```
> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1) |
GAI-LLM/ko-en-llama2-13b-mixed-v3 | GAI-LLM | "2023-10-27T00:43:02Z" | 1,320 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T22:36:08Z" | ---
license: cc-by-nc-2.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-2.0`.**
# **GAI-LLM/ko-en-llama2-13b-mixed-v3**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
ko-en-llama2-13b-mixed-v3 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy:
- Kopen-platypus + kaist_cot_deepL
- We trained on 8 A100 80GB GPUs.
# **Model Benchmark**
## KO-LLM leaderboard
- Results are tracked on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/ko-en-llama2-13b-mixed-v3
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/ko-en-llama2-13b-mixed-v3"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
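The snippet above only loads the weights; a generation call could then follow as sketched below (the prompt and decoding parameters are illustrative assumptions):
```python
inputs = tokenizer("안녕하세요, 자기소개를 해주세요.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```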
|
MNCJihun/Mistral-7B-OpenOrca-eng-kor-combined | MNCJihun | "2023-10-24T01:08:54Z" | 1,320 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-24T01:01:23Z" | Entry not found |
MNCLLM/Mistral-7B-KoCot-Platypus-4096 | MNCLLM | "2023-10-24T10:53:22Z" | 1,320 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-24T10:46:02Z" | Entry not found |
DILAB-HYU/koquality-polyglot-3.8b | DILAB-HYU | "2023-11-05T11:48:50Z" | 1,320 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"polyglot-ko",
"gpt-neox",
"KoQuality",
"ko",
"dataset:DILAB-HYU/KoQuality",
"base_model:EleutherAI/polyglot-ko-3.8b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T04:03:33Z" | ---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- polyglot-ko
- gpt-neox
- KoQuality
base_model: EleutherAI/polyglot-ko-3.8b
---
This model is an instruct-tuned EleutherAI/polyglot-ko-3.8b model.
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A30 24G) + CPU Offloading (384GB)
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
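Note: with a per-device batch size of 1, 2 devices, and 32 gradient-accumulation steps, the effective global batch size works out to 1 × 2 × 32 = 64.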
## Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- deepspeed 0.9.5 |
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.2.8 | krevas | "2023-11-07T12:38:18Z" | 1,320 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-31T23:41:57Z" | ---
license: cc-by-nc-4.0
language:
- ko
---
# Model Card for LDCC-Instruct-Llama-2-ko-13B-v4.2.8
## Developed by : Wonchul Kim ([Lotte Data Communication](https://www.ldcc.co.kr) AI Technical Team)
## Hardware and Software
* **Hardware**: We utilized a single node of 8 × A100 GPUs to train our model
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
## Base Model : [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
### Training Data
The LDCC-Instruct-Llama-2-ko-13B model was trained on publicly accessible Korean/English data sources. For fine-tuning, we used additional public data that underwent some processing and refinement.
We did not incorporate any client data owned by Lotte Data Communication.
## Prompt Template
```
### Prompt:
{instruction}
### Answer:
{output}
``` |
MNC-Jihun/Mistral-11B-OP-u1k-ver0.7 | MNC-Jihun | "2023-11-01T00:48:53Z" | 1,320 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-01T00:29:06Z" | Entry not found |
Kaeri-Jenti/llama-2-koen-13b-with-ko-wiki | Kaeri-Jenti | "2023-11-08T11:00:39Z" | 1,320 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-08T09:59:41Z" | ---
license: llama2
---
|
aerdincdal/CBDDO-LLM-8B-Instruct-v1 | aerdincdal | "2024-05-03T11:36:24Z" | 1,320 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"tr",
"dataset:aerdincdal/CBDDO-LLM-DB-V1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-02T07:38:51Z" | ---
license: mit
datasets:
- aerdincdal/CBDDO-LLM-DB-V1
language:
- tr
metrics:
- accuracy
- bertscore
- bleu
- bleurt
- brier_score
- cer
- character
- charcut_mt
- chrf
- code_eval
---
## LLama3-Based Turkish Language Model: aerdincdal/CBDDO-LLM-8B-Instruct-v1
**aerdincdal/CBDDO-LLM-8B-Instruct-v1** is a Turkish language model built on the LLama3 architecture and instruction-tuned on a curated dataset of 2.5 million rows. The model can carry out a wide range of natural language processing tasks effectively. Training gave it a deep grasp of Turkish grammar and syntax, allowing it to produce fluent and accurate text.
**Key Features of the Model:**
- **Advanced LLama3 Architecture:** This architecture provides a highly effective and innovative foundation for natural language processing models.
- **Training on an Extensive Dataset:** The model was trained on a 2.5-million-row dataset, giving it an excellent command of the language's structure and nuances.
- **High Performance:** The model can perform complex language processing tasks quickly and efficiently.
- **Versatility:** It succeeds at a wide variety of tasks, including text generation, translation, question answering, summarization, and code writing.
### How to Use the Model:
1. **Install the required libraries:**
```bash
pip install transformers
```
2. **Test the model:**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline
import torch
model_id = "aerdincdal/CBDDO-LLM-8B-Instruct-v1"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
text_generation_pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
messages = [
{"role": "system", "content": "Her zaman düşünceli yanıtlar veren bir chatbot'sun."},
{"role": "user", "content": "Mona Lisa tablosu hakkında ne düşünüyorsun?"}
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id
]
outputs = text_generation_pipeline(
prompt,
max_new_tokens=2048,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95
)
print(outputs[0]["generated_text"][len(prompt):])
```
**Output:**
```
1503'te Leonardo da Vinci tarafından resmedilen Mona Lisa, 16. yüzyılda Avrupa'da resim sanatının en ünlü eserlerinden biridir. Eski bir İtalyan aristokratı olan Lisa del Giocondo'ya benzeyen bir kadın portresidir. Bu tablo, Leonardo da Vinci'nin en ünlü eserlerinden biri olarak kabul edilir ve sanatın en iyi örneklerinden biri olarak kabul edilir. Mona Lisa'nın önemi, resim sanatının gelişiminde ve sanat tarihi boyunca etkisinin büyüklüğüne dayanmaktadır.
```
### Use Cases for the Model:
- **Text Generation:** Produce texts of various types and tones.
- **Translation:** Translate or interpret texts into other languages using its multilingual capabilities.
- **Question Answering:** Answer all kinds of questions, even the most challenging ones.
- **Summarization:** Condense long texts into short, concise summaries.
- **Code Writing:** Generate code that matches a given request.
### Code Writing Example:
In this example, the model writes a Python function that converts text to uppercase:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer, pipeline
import torch
model_id = "aerdincdal/CBDDO-LLM-8B-Instruct-v1"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
model_id,
trust_remote_code=True
)
streamer = TextStreamer(tokenizer)
text_generation_pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
model_kwargs={"torch_dtype": torch.bfloat16},
streamer=streamer
)
messages = [
{"role": "system", "content": "Her zaman düşünceli yanıtlar veren bir chatbot'sun."},
{"role": "user", "content": "Python ile bir metni büyük harfe çeviren bir fonksiyon yaz."}
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
tokenizer.eos_token_id
]
outputs = text_generation_pipeline(
prompt,
max_new_tokens=2048,
eos_token_id=terminators,
do_sample=True,
temperature=0.6,
top_p=0.95
)
print(outputs[0]["generated_text"][len(prompt):])
```
**Output:**
```python
def metni_buyuk_harfe_cevir(metin):
"""Bir metni tümüyle büyük harfe çeviren Python fonksiyonu.
Args:
metin: Küçük harflerle yazılmış bir metin.
Returns:
Büyük harflerle yazılmış metin.
"""
return metin.upper()
# Örnek kullanım
metin = "Bu bir deneme metnidir."
buyuk_harf_metin = metni_buyuk_harfe_cevir(metin)
print(buyuk_harf_metin)
```
**Explanation:**
Given the prompt (in Turkish: "Write a Python function that converts a text to uppercase."), the model produces a complete Python function with explanatory comments and documentation. The function converts any lowercase text to uppercase, making simple text manipulation easy.
With these straightforward steps, you can push the limits of Turkish natural language processing and explore how our language model can help you. Join us on this technology journey and expand your language processing capacity!
**BENCHMARK:**
```json
"config_general": {
"lighteval_sha": "494ee12240e716e804ae9ea834f84a2c864c07ca",
"num_few_shot_default": 0,
"num_fewshot_seeds": 1,
"override_batch_size": 1,
"max_samples": null,
"job_id": "",
"start_time": 1781075.607155059,
"end_time": 1784655.466140587,
"total_evaluation_time_secondes": "3579.858985528117",
"model_name": "aerdincdal/CBDDO-LLM-8B-Instruct-v1",
"model_sha": "84430552036c85cc6a16722b26496df4d93f3afe",
"model_dtype": "torch.bfloat16",
"model_size": "15.08 GB"
},
"results": {
"harness|arc:challenge|25": {
"acc": 0.4991467576791809,
"acc_stderr": 0.014611369529813262,
"acc_norm": 0.5460750853242321,
"acc_norm_stderr": 0.014549221105171872
},
"harness|hellaswag|10": {
"acc": 0.5552678749253137,
"acc_stderr": 0.004959204773046207,
"acc_norm": 0.7633937462656841,
"acc_norm_stderr": 0.004241299341050841
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5986842105263158,
"acc_stderr": 0.039889037033362836,
"acc_norm": 0.5986842105263158,
"acc_norm_stderr": 0.039889037033362836
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.02794321998933714,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.02794321998933714
}
``` |
THUDM/cogvlm2-llama3-chinese-chat-19B | THUDM | "2024-05-25T13:09:38Z" | 1,320 | 63 | transformers | [
"transformers",
"safetensors",
"text-generation",
"chat",
"cogvlm2",
"conversational",
"custom_code",
"en",
"arxiv:2311.03079",
"license:other",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-05-16T11:51:31Z" | ---
license: other
license_name: cogvlm2
license_link: https://huggingface.co/THUDM/cogvlm2-llama3-chinese-chat-19B/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- chat
- cogvlm2
inference: false
---
# CogVLM2
<div align="center">
<img src=https://raw.githubusercontent.com/THUDM/CogVLM2/53d5d5ea1aa8d535edffc0d15e31685bac40f878/resources/logo.svg width="40%"/>
</div>
<p align="center">
👋 <a href="resources/WECHAT.md" target="_blank">Wechat</a> · 💡<a href="http://36.103.203.44:7861/" target="_blank">Online Demo</a> · 🎈<a href="https://github.com/THUDM/CogVLM2" target="_blank">Github Page</a>
</p>
<p align="center">
📍Experience the larger-scale CogVLM model on the <a href="https://open.bigmodel.cn/dev/api#glm-4v">ZhipuAI Open Platform</a>.
</p>
## Model introduction
We are launching a new generation of **CogVLM2** models and open-sourcing two models built with [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). Compared with the previous generation of open-source CogVLM models, the CogVLM2 series brings the following improvements:
1. Significant improvements in many benchmarks such as `TextVQA`, `DocVQA`.
2. Support **8K** content length.
3. Support image resolution up to **1344 * 1344**.
4. Provide an open source model version that supports both **Chinese and English**.
You can see the details of the CogVLM2 family of open source models in the table below:
| Model name | cogvlm2-llama3-chat-19B | cogvlm2-llama3-chinese-chat-19B |
|------------------|-------------------------------------|-------------------------------------|
| Base Model | Meta-Llama-3-8B-Instruct | Meta-Llama-3-8B-Instruct |
| Language | English | Chinese, English |
| Model size | 19B | 19B |
| Task | Image understanding, dialogue model | Image understanding, dialogue model |
| Text length | 8K | 8K |
| Image resolution | 1344 * 1344 | 1344 * 1344 |
## Benchmark
Compared to the previous generation of open-source CogVLM models, our open-source models achieve strong results on many benchmarks. Their performance can compete with some non-open-source models, as shown in the table below:
| Model | Open Source | LLM Size | TextVQA | DocVQA | ChartQA | OCRbench | MMMU | MMVet | MMBench |
|--------------------------------|-------------|----------|----------|----------|----------|----------|----------|----------|----------|
| CogVLM1.1 | ✅ | 7B | 69.7 | - | 68.3 | 590 | 37.3 | 52.0 | 65.8 |
| LLaVA-1.5 | ✅ | 13B | 61.3 | - | - | 337 | 37.0 | 35.4 | 67.7 |
| Mini-Gemini | ✅ | 34B | 74.1 | - | - | - | 48.0 | 59.3 | 80.6 |
| LLaVA-NeXT-LLaMA3 | ✅ | 8B | - | 78.2 | 69.5 | - | 41.7 | - | 72.1 |
| LLaVA-NeXT-110B | ✅ | 110B | - | 85.7 | 79.7 | - | 49.1 | - | 80.5 |
| InternVL-1.5 | ✅ | 20B | 80.6 | 90.9 | **83.8** | 720 | 46.8 | 55.4 | **82.3** |
| QwenVL-Plus | ❌ | - | 78.9 | 91.4 | 78.1 | 726 | 51.4 | 55.7 | 67.0 |
| Claude3-Opus | ❌ | - | - | 89.3 | 80.8 | 694 | **59.4** | 51.7 | 63.3 |
| Gemini Pro 1.5 | ❌ | - | 73.5 | 86.5 | 81.3 | - | 58.5 | - | - |
| GPT-4V | ❌ | - | 78.0 | 88.4 | 78.5 | 656 | 56.8 | **67.7** | 75.0 |
| CogVLM2-LLaMA3 (Ours) | ✅ | 8B | 84.2 | **92.3** | 81.0 | 756 | 44.3 | 60.4 | 80.5 |
| CogVLM2-LLaMA3-Chinese (Ours) | ✅ | 8B | **85.0** | 88.4 | 74.7 | **780** | 42.8 | 60.5 | 78.9 |
All results were obtained without using any external OCR tools ("pixel only").
## Quick Start
Here is a simple example of how to chat with the CogVLM2 model. For more use cases, see our [github](https://github.com/THUDM/CogVLM2).
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoTokenizer
MODEL_PATH = "THUDM/cogvlm2-llama3-chinese-chat-19B"
DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'
TORCH_TYPE = torch.bfloat16 if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8 else torch.float16
tokenizer = AutoTokenizer.from_pretrained(
MODEL_PATH,
trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
torch_dtype=TORCH_TYPE,
trust_remote_code=True,
).to(DEVICE).eval()
text_only_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
while True:
image_path = input("image path >>>>> ")
if image_path == '':
print('You did not enter image path, the following will be a plain text conversation.')
image = None
text_only_first_query = True
else:
image = Image.open(image_path).convert('RGB')
history = []
while True:
query = input("Human:")
if query == "clear":
break
if image is None:
if text_only_first_query:
query = text_only_template.format(query)
text_only_first_query = False
else:
old_prompt = ''
for _, (old_query, response) in enumerate(history):
old_prompt += old_query + " " + response + "\n"
query = old_prompt + "USER: {} ASSISTANT:".format(query)
if image is None:
input_by_model = model.build_conversation_input_ids(
tokenizer,
query=query,
history=history,
template_version='chat'
)
else:
input_by_model = model.build_conversation_input_ids(
tokenizer,
query=query,
history=history,
images=[image],
template_version='chat'
)
inputs = {
'input_ids': input_by_model['input_ids'].unsqueeze(0).to(DEVICE),
'token_type_ids': input_by_model['token_type_ids'].unsqueeze(0).to(DEVICE),
'attention_mask': input_by_model['attention_mask'].unsqueeze(0).to(DEVICE),
'images': [[input_by_model['images'][0].to(DEVICE).to(TORCH_TYPE)]] if image is not None else None,
}
gen_kwargs = {
"max_new_tokens": 2048,
"pad_token_id": 128002,
}
with torch.no_grad():
outputs = model.generate(**inputs, **gen_kwargs)
outputs = outputs[:, inputs['input_ids'].shape[1]:]
response = tokenizer.decode(outputs[0])
response = response.split("<|end_of_text|>")[0]
print("\nCogVLM2:", response)
history.append((query, response))
```
## License
This model is released under the CogVLM2 [LICENSE](LICENSE). For models built with Meta Llama 3, please also adhere to the [LLAMA3_LICENSE](LLAMA3_LICENSE).
## Citation
If you find our work helpful, please consider citing the following papers
```
@misc{wang2023cogvlm,
title={CogVLM: Visual Expert for Pretrained Language Models},
author={Weihan Wang and Qingsong Lv and Wenmeng Yu and Wenyi Hong and Ji Qi and Yan Wang and Junhui Ji and Zhuoyi Yang and Lei Zhao and Xixuan Song and Jiazheng Xu and Bin Xu and Juanzi Li and Yuxiao Dong and Ming Ding and Jie Tang},
year={2023},
eprint={2311.03079},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
timm/res2net50_26w_4s.in1k | timm | "2023-04-24T00:05:02Z" | 1,319 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"arxiv:1904.01169",
"license:unknown",
"region:us"
] | image-classification | "2023-04-24T00:04:44Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: unknown
datasets:
- imagenet-1k
---
# Model card for res2net50_26w_4s.in1k
A Res2Net (Multi-Scale ResNet) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 25.7
- GMACs: 4.3
- Activations (M): 12.6
- Image size: 224 x 224
- **Papers:**
- Res2Net: A New Multi-scale Backbone Architecture: https://arxiv.org/abs/1904.01169
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/gasvn/Res2Net/
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('res2net50_26w_4s.in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_26w_4s.in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 64, 112, 112])
# torch.Size([1, 256, 56, 56])
# torch.Size([1, 512, 28, 28])
# torch.Size([1, 1024, 14, 14])
# torch.Size([1, 2048, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'res2net50_26w_4s.in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{gao2019res2net,
title={Res2Net: A New Multi-scale Backbone Architecture},
author={Gao, Shang-Hua and Cheng, Ming-Ming and Zhao, Kai and Zhang, Xin-Yu and Yang, Ming-Hsuan and Torr, Philip},
journal={IEEE TPAMI},
doi={10.1109/TPAMI.2019.2938758},
}
```
|
heegyu/42dot_LLM-PLM-1.3B-mt | heegyu | "2023-10-19T04:40:58Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-03T00:13:19Z" | # WIP
- Still a work in progress; the model has some issues.
original model: [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B)
## 사용 예시
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "heegyu/42dot_LLM-PLM-1.3B-mt"
model = AutoModelForCausalLM.from_pretrained(model_id).eval().half()
tokenizer = AutoTokenizer.from_pretrained(model_id)
if torch.cuda.is_available():
device = "cuda:0"
model.to(device)
else:
device = "cpu"
@torch.no_grad()
def generate_text(prompt):
input_ids = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False).to(device)
output_ids = model.generate(input_ids, min_new_tokens=4, max_length=1024, early_stopping=True)
output_ids = output_ids.cpu()[0][len(input_ids[0]):]
print(tokenizer.decode(output_ids))
bos, eos = tokenizer.bos_token, tokenizer.eos_token
# Korean -> English
text = "삼성전자가 갤럭시 스마트폰·태블릿 전용 ‘클라우드 게임 플랫폼’을 이르면 이달 공개한다. 전 세계 10억 명의 갤럭시 사용자가 콘솔(게임기)을 구매하거나 게임 앱을 내려받지 않아도 스마트폰을 통해 실시간으로 유명 게임을 즐길 수 있게 되는 것이다. 기기 판매에 의존하지 않고 안정적인 서비스 수익을 올리려는 삼성전자의 ‘신사업 승부수’란 평가가 나온다."
generate_text(f"{bos} {text} {eos} ")
# ㈜Samsung Electronics will release the Cloud Game Platform for Galaxy smartphones and tablets in early this month, allowing users of 1 billion people around the world to enjoy famous games on their smartphones in real time without buying consoles or downloading game apps. It is said to be a 'business move' by Samsung Electronics, which is trying to earn stable service revenue without relying on sales.<|endoftext|>
# English -> Korean; the last sentence got cut off
text = """Samsung Electronics will unveil a "cloud game platform" exclusively for Galaxy smartphones and tablets as early as this month. One billion Galaxy users around the world will be able to enjoy famous games in real time through smartphones without having to purchase consoles or download game apps. Analysts say that Samsung Electronics is a "new business winning move" to earn stable service profits without relying on device sales."""
generate_text(f"{bos} {text} {eos} ")
# NC는 이달 중 갤럭시 스마트폰과 태블릿 전용 '클라우드 게임 플랫폼'을 독점 공개할 예정인데, 전 세계 1억명의 갤럭시 사용자들은 콘솔이나 게임 앱 다운로드 없이 스마트폰을 통해 유명 게임을 실시간으로 즐길 수 있게 됐다.<|endoftext|>
# English -> Korean; you can pin translations for specific words up front.
text = """Samsung Electronics will unveil a "cloud game platform" exclusively for Galaxy smartphones and tablets as early as this month."""
generate_text(f"{bos} Samsung Electronics {eos} 삼성전자 {eos} {bos} {text} {eos} ")
# N가전 삼성전자가 갤럭시 스마트폰과 태블릿 전용 '클라우드 게임 플랫폼'을 이달 중으로 공개한다.<|endoftext|>
```
## Model evaluation
```
python main.py \
--model hf-causal \
--model_args pretrained=heegyu/42dot_LLM-PLM-1.3B-mt \
--tasks kobest_hellaswag,kobest_copa,kobest_boolq,kobest_sentineg \
--device cuda:0
```
- boolq, copa, and hellaswag scores decreased compared to the original model.
- sentineg improved substantially.
hf-causal (pretrained=heegyu/42dot_LLM-PLM-1.3B-mt), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq | 0|acc |0.5021|± |0.0133|
| | |macro_f1|0.3343|± |0.0059|
|kobest_copa | 0|acc |0.6640|± |0.0149|
| | |macro_f1|0.6633|± |0.0149|
|kobest_hellaswag| 0|acc |0.4020|± |0.0219|
| | |acc_norm|0.5220|± |0.0224|
| | |macro_f1|0.3974|± |0.0218|
|kobest_sentineg | 0|acc |0.8010|± |0.0201|
| | |macro_f1|0.8003|± |0.0201| |
mncai/mistral-7b-ko-1871-2p1 | mncai | "2023-10-06T09:56:53Z" | 1,319 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:ttagu99/ko_f_1871",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-05T05:19:11Z" | ---
license: apache-2.0
datasets:
- ttagu99/ko_f_1871
pipeline_tag: text-generation
--- |
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.1.8 | krevas | "2023-10-20T00:58:07Z" | 1,319 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-20T00:49:10Z" | ---
license: cc-by-nc-4.0
---
|
nakhyeon/llama-ko-qlora-1024 | nakhyeon | "2023-10-21T07:56:40Z" | 1,319 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-21T07:20:49Z" | ---
license: mit
---
|
nayohan/polyglot-ko-1.3b-Inst | nayohan | "2023-10-26T10:41:19Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"polyglot-ko",
"gpt-neox",
"KoQuality",
"ko",
"dataset:DILAB-HYU/KoQuality",
"base_model:EleutherAI/polyglot-ko-1.3b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-21T12:08:17Z" | ---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- polyglot-ko
- gpt-neox
- KoQuality
base_model: EleutherAI/polyglot-ko-1.3b
---
This model is an instruction-tuned polyglot-ko-1.3b model, trained using only 1% of the [Kullm, OIG, KoAlpaca] instruction datasets.
len10_k100_mppl_n0.1.json
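For reference, a minimal inference sketch with transformers (the prompt and generation settings below are illustrative assumptions, not from the original card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "nayohan/polyglot-ko-1.3b-Inst"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Hypothetical instruction-style prompt ("Question: What is the capital of Korea?\nAnswer:")
inputs = tokenizer("질문: 한국의 수도는 어디인가요?\n답변:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```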
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A30 24G) + No Offloading
- num_devices: 2
- gradient_accumulation_steps: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
## Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- deepspeed 0.9.5 |
kiyoonyoo/ko-en-trans-platypus-13b-v3 | kiyoonyoo | "2023-10-22T06:13:34Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-22T06:06:36Z" | Entry not found |
nayohan/llama-2-ko-7b-Inst | nayohan | "2023-10-26T10:44:28Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llama-2-ko",
"KoQuality",
"ko",
"dataset:DILAB-HYU/KoQuality",
"base_model:beomi/llama-2-ko-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T04:31:11Z" | ---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- llama-2-ko
- KoQuality
base_model: beomi/llama-2-ko-7b
---
This model is an instruction-tuned llama-2-ko-7b model, trained using only 10% of the [Kullm, OIG, KoAlpaca] instruction datasets.
len10_k100_mppl_n0.1.json -> 121step
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A30 24G) + CPU Offloading
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
## Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- deepspeed 0.9.5 |
MNCLLM/Mistral-7B-OP-over1k-grad0.3 | MNCLLM | "2023-10-25T09:38:41Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T08:51:15Z" | Entry not found |
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.2.2 | krevas | "2023-10-26T10:40:20Z" | 1,319 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-26T10:32:40Z" | ---
license: cc-by-nc-4.0
---
|
jiwoochris/ko-llama2-13b-v6 | jiwoochris | "2023-10-28T12:30:04Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-28T12:19:56Z" | Entry not found |
MNC-Jihun/Mistral-7B-A-u0.5-b2-ver0.4 | MNC-Jihun | "2023-10-31T04:56:52Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-31T04:49:14Z" | Entry not found |
jiwoochris/llama2_tmt-13b-v1 | jiwoochris | "2023-11-02T08:52:31Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-02T08:43:52Z" | Entry not found |
cepiloth/ko-en-llama2-13b-finetune-ex | cepiloth | "2023-11-04T10:50:23Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-04T10:11:35Z" | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
---
# Model Trained Using AutoTrain |
genne/otter3.1.3n_7b | genne | "2023-11-10T01:32:21Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-10T01:24:15Z" | Entry not found |
LDCC/LDCC-Instruct-Llama-2-ko-13B-v1.6 | LDCC | "2023-11-13T07:21:47Z" | 1,319 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-13T07:16:23Z" | ---
license: cc-by-nc-4.0
---
|
kyujinpy/Ko-PlatYi-6B-kiwi | kyujinpy | "2023-12-09T13:23:21Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOR-Orca-Platypus-kiwi",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-03T19:28:27Z" | ---
language:
- ko
datasets:
- kyujinpy/KOR-Orca-Platypus-kiwi
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **Ko-PlatYi-6B-kiwi**
<img src='./Ko-PlatYi.png' width=256>
## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
Ko-PlatYi-6B-kiwi is an auto-regressive language model based on the Yi transformer architecture.
**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]
**Base Model**
[beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)
**Training Dataset**
[kyujinpy/KOR-Orca-Platypus-kiwi](https://huggingface.co/datasets/kyujinpy/KOR-Orca-Platypus-kiwi).
# **Model Benchmark**
## Open leaderboard
> Follow up as [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | CommonGen-V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Ko-PlatYi-6B-O | 49.00 | 43.52 | 53.59 | 47.47 | 41.01 | 59.39 |
| **Ko-PlatYi-6B-kiwi** | 48.75 | 41.98 | 53.61 | 46.10 | 38.30 | 63.75 |
| Ko-PlatYi-6B-gu | 48.76 | 42.75 | 54.00 | 44.66 | 41.22 | 61.16 |
| Ko-PlatYi-6B | 49.97 | 43.00 | 53.55 | 46.50 | 40.31 | 66.47 |
| Yi-Ko-6B | 48.79 | 41.04 | 53.39 | 46.28 | 41.64 | 61.63 |
---
## AI-Harness Evaluation
> AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness)
| Model | BoolQ | Copa | HellaSwag | Sentineg |
| --- | --- | --- | --- | --- |
| | *Zero-shot* ||||
| Ko-PlatYi-6B-O | 0.3343 | 0.7687 | 0.4833 | 0.5794 |
| **Ko-PlatYi-6B-kiwi** | 0.3343 | 0.7665 | 0.4746 | **0.6248** |
| Ko-PlatYi-6B-gu | **0.7077** | **0.7696** | 0.4797 | 0.3979 |
| Ko-PlatYi-6B | 0.3343 | 0.7684 | **0.4917** | 0.5226 |
| Yi-Ko-6B | **0.7070** | 0.7696 | **0.5009** | 0.4044 |
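For reference, these scores can be reproduced with a harness invocation along the lines of the command shown earlier in this document (the exact arguments for this fork are an assumption):
```
python main.py \
    --model hf-causal \
    --model_args pretrained=kyujinpy/Ko-PlatYi-6B-kiwi \
    --tasks kobest_boolq,kobest_copa,kobest_hellaswag,kobest_sentineg \
    --device cuda:0
```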
---
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Ko-PlatYi-6B-kiwi"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- |
GAI-LLM/llama-2-koen-13b-dpo-v3 | GAI-LLM | "2023-12-05T00:39:54Z" | 1,319 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-05T00:12:29Z" | ---
license: cc-by-nc-4.0
---
|
Minirecord/minyi_dpo_6b | Minirecord | "2023-12-18T09:00:29Z" | 1,319 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-18T08:40:37Z" | ---
license: apache-2.0
---
|
StatPan/SinGung7B-DPO-v0.1-2200 | StatPan | "2023-12-26T12:40:41Z" | 1,319 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-26T12:36:24Z" | Entry not found |
ValiantLabs/Fireplace-13b | ValiantLabs | "2024-02-18T19:35:58Z" | 1,319 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"fireplace",
"function-calling",
"code",
"code-instruct",
"valiant",
"valiant-labs",
"llama-2",
"llama-2-chat",
"13b",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-01-18T19:20:11Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- fireplace
- function-calling
- code
- code-instruct
- valiant
- valiant-labs
- llama
- llama-2
- llama-2-chat
- 13b
model_type: llama
license: apache-2.0
---

Fireplace-13b is a function calling model built on the Llama 2 architecture.
- Built on llama-2-13b architecture, using [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) as the base model.
- Emphasizes function calling and code-instruct as skills.
- Version 1.1 improves output structure for a superior user experience.
(If you're looking for a friendly general-purpose chat model, try ours: [llama-13b](https://huggingface.co/ValiantLabs/ShiningValiantXS) and [70b](https://huggingface.co/ValiantLabs/ShiningValiant))
## Version
This is Version **1.1** of Fireplace-13b.
The current version of Fireplace-13b uses [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) trained on [glaive-function-calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2).
Fireplace is the first release in our Build Tools campaign, to deliver helpful open source capabilities for users and creators.
**The next release in our Build Tools series will be coming soon, with an initial release at 70b parameters** - we're very excited to bring this to everyone!
We're also working to bring Fireplace to larger model architectures, to maximize baseline model capability and function-calling performance.
## Prompting Guide
Fireplace-13b specializes in function calling and code instruct/chat.
See [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) for the code capabilities of the base model.
For function calling in this version of the model, the recommended format is to deliver the function(s) in a system message and then proceed with chat:
SYSTEM: You are Fireplace, an expert code assistant with access to the following functions. Use them if required -
{
""name"": ""function_name"",
}
USER: Can you (do thing from function)?
ASSISTANT:
Assistant will deliver function call responses between \<functioncall> and <|endoftext|>:

(Please note that <|endoftext|> is not an EOS/EOT token, it is used to indicate the end of function call responses specifically.)
For handling of function call responses, append "FUNCTION RESPONSE: " to the existing chat history:

Fireplace is optimized for function/code capabilities and not general chat, but it has also been trained to utilize general instruct-chat capabilities:
SYSTEM: You are a helpful assistant.
USER: user chat input
ASSISTANT:
The model may be subject to errors and limitations, including those of the base model and dataset. We offer Fireplace-13b as open source for all to use. The user is responsible for all outputs.

Fireplace is created by [Valiant Labs.](http://valiantlabs.ca/)
Try our flagship chat model, [Shining Valiant!](https://huggingface.co/ValiantLabs/ShiningValiant)
[Follow us on X for updates on our models!](https://twitter.com/valiant_labs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models. |
QuantFactory/HALU-8B-LLAMA3-BRSLURP-GGUF | QuantFactory | "2024-06-08T11:20:49Z" | 1,319 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"text-generation",
"base_model:Hastagaras/HALU-8B-LLAMA3-BRSLURP",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-08T09:41:51Z" | ---
base_model: Hastagaras/HALU-8B-LLAMA3-BRSLURP
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
---
# QuantFactory/HALU-8B-LLAMA3-BRSLURP-GGUF
This is a quantized version of [Hastagaras/HALU-8B-LLAMA3-BRSLURP](https://huggingface.co/Hastagaras/HALU-8B-LLAMA3-BRSLURP) created using llama.cpp.
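A minimal usage sketch with llama-cpp-python (the quant filename is an assumption; pick an actual `.gguf` file from this repo):
```python
from llama_cpp import Llama

# Downloads a matching GGUF file from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/HALU-8B-LLAMA3-BRSLURP-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quantization level
)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```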
# Model Description
You can see the Halu 0.35 model details in [HERE](https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.35)
So, two different models with different base models... a fusion of OpenAI and Meta AI; TruthfulQA is gonna be tough.
After some testing, I think this super duper easy merge that I did while I was half asleep is actually pretty decent.
After another testing...the Blackroot influence is way smoother than the Anjir, probably because the base models are different, so...no duplicate layers, I guess.
Works better with around 0.95-1.1 temp.
**EDIT:** I think this is too safe, I don't like it...
### Models Merged
The following models were included in the merge:
* [Hastagaras/Halu-8B-Llama3-v0.35](https://huggingface.co/Hastagaras/Halu-8B-Llama3-v0.35)
* [Hastagaras/Halu-8B-Llama3-Blackroot](https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Hastagaras/Halu-8B-Llama3-v0.35
layer_range: [0,32]
- model: Hastagaras/Halu-8B-Llama3-Blackroot
layer_range: [0,32]
merge_method: slerp
base_model: Hastagaras/Halu-8B-Llama3-v0.35
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.5, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.5, 0.5, 0]
- value: 0.5
dtype: bfloat16
``` |
timm/vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k | timm | "2023-05-06T00:52:57Z" | 1,318 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:imagenet-1k",
"dataset:imagenet-21k",
"arxiv:2106.10270",
"arxiv:2010.11929",
"license:apache-2.0",
"region:us"
] | image-classification | "2022-12-23T00:34:35Z" | ---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-21k
---
# Model card for vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k
A ResNet - Vision Transformer (ViT) hybrid image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k (with additional augmentation and regularization) in JAX by paper authors, ported to PyTorch by Ross Wightman.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 6.3
- GMACs: 0.4
- Activations (M): 1.9
- Image size: 224 x 224
- **Papers:**
- How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers: https://arxiv.org/abs/2106.10270
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
- **Original:** https://github.com/google-research/vision_transformer
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_tiny_r_s16_p8_224.augreg_in21k_ft_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 50, 192) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@article{steiner2021augreg,
title={How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers},
author={Steiner, Andreas and Kolesnikov, Alexander and and Zhai, Xiaohua and Wightman, Ross and Uszkoreit, Jakob and Beyer, Lucas},
journal={arXiv preprint arXiv:2106.10270},
year={2021}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
beomi/kollama-13b | beomi | "2023-06-28T03:23:51Z" | 1,318 | 16 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"KoLLAMA",
"KoreanGPT",
"ko",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-04-14T01:20:46Z" | ---
license: mit
language:
- ko
- en
metrics:
- perplexity
- accuracy
pipeline_tag: text-generation
tags:
- llama
- KoLLAMA
- KoreanGPT
---
> 🚧 Note: this repo is under construction 🚧
## Todo
✅ - finished
⏳ - currently working on it
- ✅ Train new BBPE Tokenizer
- ✅ Test train code on TPUv4 Pods (with model parallel)
- ✅ Converting test (jax to PyTorch)
- ✅ LM train validation on minimal dataset (1 sentence 1000 step)
- ⏳ Build Data Shuffler (curriculum learning)
- ⏳ Train 7B Model
- ⏳ Train 13B Model
- Train 33B Model
- Train 65B Model
# KoLLaMA-13B Model Card
KoLLaMA (13B) is trained on a Korean/English/code dataset with the LLaMA architecture via JAX,
with warm support from the [Google TPU Research Cloud program](https://sites.research.google/trc/about/), which provided part of the computation resources.
## Model details
**Researcher developing the model**
Junbum Lee (aka Beomi)
**Model date**
KoLLaMA has been trained from 2022.04 onward.
**Model version**
This is an alpha version of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
(This repo contains the 13B model!)
**Paper or resources for more information**
More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
More info for KoLLaMA:
[TBD]
**Citations details**
KoLLAMA: [TBD]
LLAMA: https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
MIT
**Where to send questions or comments about the model**
Questions and comments about KoLLaMA can be sent via the [GitHub repository](https://github.com/beomi/KoLLAMA) of the project , by opening an issue.
## Intended use
**Primary intended uses**
The primary use of KoLLaMA is research on Korean open-source large language models.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
## Evaluation datasets
[TBD]
## Training dataset
[TBD]
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content. |
destitech/controlnet-inpaint-dreamer-sdxl | destitech | "2024-04-23T20:20:39Z" | 1,318 | 79 | diffusers | [
"diffusers",
"safetensors",
"art",
"controlnet",
"stable-diffusion",
"stable-diffusion-xl",
"image-to-image",
"license:openrail",
"region:us"
] | image-to-image | "2023-09-24T19:50:47Z" | ---
license: openrail
tags:
- art
- controlnet
- stable-diffusion
- stable-diffusion-xl
- image-to-image
---
# Controlnet - Inpainting dreamer
This ControlNet has been conditioned on **Inpainting** and **Outpainting**.
**It is an early alpha version made by experimenting in order to learn more about controlnet.**
**You want to support this kind of work and the development of this model ? Feel free to [buy me a coffee](https://www.buymeacoffee.com/destitech) !**
It is designed to work with [Stable Diffusion XL](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl). It should work with any model based on it.
**The image to inpaint or outpaint is to be used as input of the controlnet in a txt2img pipeline with denoising set to 1.0. The part to inpaint or outpaint should be colored in solid white.**
Depending on the prompts, the rest of the image might be kept as is or modified more or less.
## Model Details
- **Developed by:** [Destitech](https://destitech.com)
- **Model type:** Controlnet
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
## Released Checkpoints
[Model link](./models/diffusion_pytorch_model.safetensors)
[Model link - fp16 version - Built by OzzyGT](./models/diffusion_pytorch_model.safetensors)
## Usage with Diffusers
OzzyGT made a really good guide on how to use this model for outpainting, give it a try [Here](https://github.com/huggingface/diffusers/discussions/7482) !
A big thank you to him for pointing out to me how to name the files for diffusers compatibility and for the fp16 version. You should be able to use it this way with both the normal and fp16 versions:
```python
from diffusers import ControlNetModel
import torch
controlnet = ControlNetModel.from_pretrained(
"destitech/controlnet-inpaint-dreamer-sdxl", torch_dtype=torch.float16, variant="fp16"
)
```
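Building on that, a rough end-to-end sketch of the txt2img-with-controlnet flow described above (the SDXL base checkpoint, input filename, and step count are illustrative assumptions, not recommendations from the author):
```python
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.utils import load_image
import torch

controlnet = ControlNetModel.from_pretrained(
    "destitech/controlnet-inpaint-dreamer-sdxl", torch_dtype=torch.float16, variant="fp16"
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Conditioning image: the original picture with the region to outpaint filled solid white.
conditioning = load_image("masked.png")
image = pipe("a cyberpunk manor", image=conditioning, num_inference_steps=30).images[0]
image.save("output.png")
```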
## Usage with ComfyUI
[Workflow link](./workflows/workflow.json)
<a href="./workflows/workflow-preview.png"><img style="margin:0;padding:0;" src="./workflows/workflow-preview.png"/></a>
<br/>
<a href="./workflows/masked.png"><img width="256" style="margin:0;padding:0;" src="./workflows/masked.png"/></a>
<a href="./workflows/output_cyberpunk_manor.png"><img width="256" style="margin:0;padding:0;" src="./workflows/output_cyberpunk_manor.png"/></a>
<a href="./workflows/output_casual_woman.png"><img width="256" style="margin:0;padding:0;" src="./workflows/output_casual_woman.png"/></a>
## More examples
<a href="./tests/test1.jpeg"><img width="768" style="margin:0;padding:0;" src="./tests/test1-thumb.jpeg"/></a>
<br/>
<a href="./tests/test2.jpeg"><img width="768" style="margin:0;padding:0;" src="./tests/test2-thumb.jpeg"/></a>
|
MNC-Jihun/Mistral-11B-Omni-OP-u1k-ver0.5 | MNC-Jihun | "2023-10-30T02:45:26Z" | 1,318 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T02:37:17Z" | Entry not found |
DILAB-HYU/koquality-ko-ref-llama2-7b | DILAB-HYU | "2023-11-05T11:58:10Z" | 1,318 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"KoQuality",
"ko",
"dataset:DILAB-HYU/KoQuality",
"base_model:hyunseoki/ko-ref-llama2-7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-30T04:29:55Z" | ---
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- KoQuality
- llama
base_model: hyunseoki/ko-ref-llama2-7b
---
This model is an instruction-tuned hyunseoki/ko-ref-llama2-7b model, trained using KoQuality.
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 1
- seed: 42
- distributed_type: multi-GPU (A30 24G) + CPU Offloading (384GB)
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
## Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- deepspeed 0.9.5 |
MNC-LLM/Mistral-11B-Omni-OPA-u1k-ver0.7 | MNC-LLM | "2023-11-02T00:54:23Z" | 1,318 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-02T00:35:56Z" | Entry not found |
OpenBuddy/openbuddy-llemma-34b-v13.2 | OpenBuddy | "2023-11-09T11:52:24Z" | 1,318 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"ru",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-11-04T05:20:00Z" | ---
language:
- zh
- en
- fr
- de
- ja
- ko
- it
- ru
pipeline_tag: text-generation
inference: false
library_name: transformers
license: llama2
---
# OpenBuddy - Open Multilingual Chatbot
GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy)
Website and Demo: [https://openbuddy.ai](https://openbuddy.ai)
Evaluation result of this model: [Evaluation.txt](Evaluation.txt)

# Copyright Notice
Base model: https://huggingface.co/EleutherAI/llemma_34b
License: llama2
This model is built upon Meta's LLaMA series of models and is subject to Meta's licensing agreement.
This model is intended for use only by individuals who have obtained approval from Meta and are eligible to download LLaMA.
If you have not obtained approval from Meta, you must visit the https://ai.meta.com/llama/ page, read and agree to the model's licensing agreement, submit an application, and wait for approval from Meta before downloading the model from this page.
## Disclaimer
All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions.
OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.
By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy.
Minirecord/llama13b_test02 | Minirecord | "2023-12-01T09:34:52Z" | 1,318 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-12-01T09:28:58Z" | ---
license: apache-2.0
---
|
cleanrl/EleutherAI_pythia-1b-deduped__sft__tldr | cleanrl | "2024-05-15T02:46:01Z" | 1,318 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2024-05-07T20:02:02Z" | Entry not found |
SalimBou5/dpo_model | SalimBou5 | "2024-06-03T15:38:41Z" | 1,318 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-7b-bnb-4bit",
"region:us"
] | null | "2024-06-03T15:31:00Z" | ---
library_name: peft
base_model: unsloth/gemma-7b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** OSY (Yasmine Chaker, Oussama Gabouj, Salim Boussofara)
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Causal LM
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
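A minimal loading sketch, assuming standard PEFT adapter usage on top of the stated base model (the exact usage is an assumption, as the card leaves this section unfilled):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit Gemma base model, then attach this DPO-trained LoRA adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-7b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "SalimBou5/dpo_model")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-bnb-4bit")
```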
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
quantumaikr/KoreanLM-3B | quantumaikr | "2023-09-02T12:55:53Z" | 1,317 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"korean",
"foundation",
"ko",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-08-21T09:02:18Z" | ---
language:
- ko
- en
pipeline_tag: text-generation
tags:
- llama
- korean
- foundation
---
<p align="center" width="100%">
<img src="https://i.imgur.com/snFDU0P.png" alt="KoreanLM icon" style="width: 500px; display: block; margin: auto; border-radius: 10%;">
</p>
# KoreanLM: Korean Language Model Project
KoreanLM is an open-source project for developing Korean language models. Most current language models focus on English, so Korean is comparatively under-trained and is often tokenized inefficiently. The KoreanLM project was started to address these problems and to provide a language model optimized for Korean.
## Project Goals
1. Develop a language model specialized for Korean: build a model that reflects Korean grammar, vocabulary, and cultural characteristics so it can understand and generate Korean more accurately.
2. Introduce an efficient tokenization scheme: improve model performance by adopting a tokenization approach that analyzes Korean text efficiently and accurately.
3. Improve the usability of large language models: today's very large models are hard for companies to fine-tune on their own data. We adjust the size of the Korean language model to improve usability and make it easier to apply to natural language processing tasks.
## Usage
The following example loads the model and tokenizer via the transformers library.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained("quantumaikr/KoreanLM-3B")
tokenizer = transformers.AutoTokenizer.from_pretrained("quantumaikr/KoreanLM-3B")
```
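Continuing the example above, a short generation sketch (the prompt and decoding settings are assumptions):
```python
# Prompt means "The advantages of a Korean language model are".
prompt = "한국어 언어모델의 장점은"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```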
## Technical Inquiries
[email protected]
www.quantumai.kr |
giacomoarienti/nsfw-classifier | giacomoarienti | "2024-05-30T16:41:33Z" | 1,317 | 6 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"dataset:deepghs/nsfw_detect",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-09-05T12:19:30Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: nsfw-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9200000166893005
datasets:
- deepghs/nsfw_detect
---
# nsfw-classifier
NSFW Classifier using [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)
|
momo/polyglot-ko-12.8b-Chat-QLoRA-Merge_v3 | momo | "2023-10-03T02:10:03Z" | 1,317 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-02T13:54:50Z" | ---
license: apache-2.0
---
|
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.1.2 | krevas | "2023-10-18T00:12:37Z" | 1,317 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-18T00:00:23Z" | ---
license: cc-by-nc-4.0
---
|
kyujinpy/Kosy-Platypus2-13B | kyujinpy | "2023-11-02T01:52:25Z" | 1,317 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"dataset:kyujinpy/KOpen-platypus",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-24T11:59:34Z" | ---
language:
- ko
datasets:
- kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
# **Kosy🍵llama**

## Model Details
**Model Developers** Kyujin Han (kyujinpy)
**Model Description**
A new version of Ko-Platypus2, trained using the [NEFTune](https://github.com/neelsjain/NEFTune) method!
(Noisy + KO + llama = Kosy🍵llama)
**Repo Link**
Github **KoNEFTune**: [Kosy🍵llama](https://github.com/Marker-Inc-Korea/KoNEFTune)
If you visit our github, you can easily apply **Random_noisy_embedding_fine-tuning**!!
**Base Model**
[hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
Version of combined dataset: [kyujinpy/KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus)
We used an A100 40GB GPU and Colab for training.
# **Model comparisons**
[KO-LLM leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)
# **NEFT comparisons**

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| [Ko-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 45.60 | 44.20 | 54.31 | 42.47 | 44.41 | 42.62 |
| *NEFT(🍵kosy)+MLP-v1 | 43.64 | 43.94 | 53.88 | 42.68 | 43.46 | 34.24 |
| *NEFT(🍵kosy)+MLP-v2 | 45.45 | 44.20 | 54.56 | 42.60 | 42.68 | 42.98 |
| [**NEFT(🍵kosy)+MLP-v3**](https://huggingface.co/kyujinpy/Kosy-platypus2-13B-v3) | 46.31 | 43.34 | 54.54 | 43.38 | 44.11 | 46.16 |
| NEFT(🍵kosy)+Attention | 44.92 |42.92 | 54.48 | 42.99 | 43.00 | 41.20 |
| NEFT(🍵kosy) | 45.08 | 43.09 | 53.61 | 41.06 | 43.47 | 43.21 |
> *Trained with different hyperparameters, such as learning_rate, batch_size, epochs, etc.
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "kyujinpy/Koisy-Platypus2-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- |
MNCJihunKim/Mistral-7B-OpenOrca-orca-platy-out1kover | MNCJihunKim | "2023-10-28T15:03:18Z" | 1,317 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-25T12:08:40Z" | Entry not found |
GAI-LLM/ko-en-llama2-13b-mixed-v4 | GAI-LLM | "2023-10-27T00:44:35Z" | 1,317 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2023-10-26T04:15:49Z" | ---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---
**The license is `cc-by-nc-4.0`.**
# **GAI-LLM/ko-en-llama2-13b-mixed-v4**
## Model Details
**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)
**Input** Models input text only.
**Output** Models generate text only.
**Model Architecture**
GAI-LLM/ko-en-llama2-13b-mixed-v4 is an auto-regressive language model based on the LLaMA2 transformer architecture.
**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)
**Training Dataset**
- We combined open Korean datasets using a mixed strategy.
- Kopen-platypus + kaist_cot_deepL + open_orca-ko (NIV + FLAN + TO)
- We used 8 * A100 80GB GPUs for training.
# **Model Benchmark**
## KO-LLM leaderboard
- Follow up as [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
# Implementation Code
```python
### GAI-LLM/ko-en-llama2-13b-mixed-v4
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "GAI-LLM/ko-en-llama2-13b-mixed-v4"
model = AutoModelForCausalLM.from_pretrained(
repo,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
--- |