Dataset schema:

| Column | Type | Range / Values |
| --- | --- | --- |
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | unknown | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (classes) | 245 values |
| tags | sequence | length 1–4.05k |
| pipeline_tag | string (classes) | 48 values |
| createdAt | unknown | |
| card | string | length 1–901k |
gangkongkong/llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge
gangkongkong
"2023-11-01T12:32:41Z"
1,317
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-01T12:28:03Z"
Entry not found
jin05102518/Astral-7B-1.0Epoch-Instruct-v0.06
jin05102518
"2023-11-03T02:20:58Z"
1,317
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-02T15:11:34Z"
--- license: cc-by-nc-4.0 ---
JYKIM-AI/Mistral-7B-SFT-v0.1
JYKIM-AI
"2023-11-20T10:24:00Z"
1,317
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-20T09:40:12Z"
Entry not found
Ja-ck/Mistral-instruct-Y24-v5
Ja-ck
"2023-11-24T03:55:15Z"
1,317
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "ko", "dataset:kyujinpy/OpenOrca-KO", "dataset:beomi/KoAlpaca-v1.1a", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-24T03:29:32Z"
--- license: apache-2.0 datasets: - kyujinpy/OpenOrca-KO - beomi/KoAlpaca-v1.1a language: - ko library_name: transformers pipeline_tag: text-generation --- ## Prompt Template ``` ### 질문: {instruction} ### 답변: {output} ```
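A minimal sketch of filling in the template above; the helper function is hypothetical, and only the 질문/답변 markers come from the card:

```python
def build_prompt(instruction: str) -> str:
    # Single-turn prompt following the card's template; the model's
    # completion is expected after the "### 답변:" marker.
    return f"### 질문: {instruction}\n\n### 답변:"

print(build_prompt("한국의 수도는 어디인가요?"))
```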
HumanF-MarkrAI/mistralopithecus-v3-dpo-7b
HumanF-MarkrAI
"2023-11-26T11:08:51Z"
1,317
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-26T10:45:51Z"
Entry not found
PracticeLLM/Custom-KoLLM-13B-v7
PracticeLLM
"2023-12-02T16:26:18Z"
1,317
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-30T05:53:37Z"
--- language: - ko datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- # **⭐My custom LLM 13B⭐** ## Model Details **Model Developers** - Kyujin Han (kyujinpy) **Model Architecture** - My custom LLM 13B is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** - [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) **Training Dataset** - [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3). --- # Model comparisons > Ko-LLM leaderboard(11/27; [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)) | Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | --- | --- | --- | --- | --- | --- | --- | | ⭐My custom LLM 13B-v1⭐ | **50.19** | **45.99** | 56.93 | 41.78 | 41.66 | **64.58** | | ⭐My custom LLM 13B-v4⭐ | 49.89 | 45.05 | **57.06** | 41.83 | **42.93** | 62.57 | | **⭐My custom LLM 13B-v7⭐** | 49.11 | 45.90 | 56.80 | **41.92** | 41.42 | 59.50 | --- # Model comparisons 2 > AI-Harness evaluation; [link](https://github.com/Beomi/ko-lm-evaluation-harness) | Model | Copa | Copa | HellaSwag | HellaSwag | BoolQ | BoolQ | Sentineg | Sentineg | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | 0-shot | 5-shot | | ⭐My custom LLM 13B-v1⭐ | 0.7987 | 0.8269 | 0.4994 | 0.5660 | 0.3343 | 0.5060 | 0.6984 | 0.9723 | | ⭐My custom LLM 13B-v4⭐ | **0.7988** | 0.8279 | **0.4995** | 0.4953 | 0.3343 | 0.3558 | **0.7825** | 0.9698 | | **⭐My custom LLM 13B-v7⭐** | 0.7958 | 0.8289 | 0.4944 | 0.4932 | **0.3359** | 0.4696 | 0.4876 | 0.9748 | | [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b) | 0.7768 | 0.8128 | 0.4999 | 0.5127 | 0.3988 | 0.7038 | 0.5870 | 0.9748 | --- # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "PracticeLLM/Custom-KoLLM-13B-v7" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` # Hyperparameters - QLoRA - lora_target_modules '[gate_proj, down_proj, up_proj]' - lora_r 64
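As a rough sketch, the QLoRA hyperparameters listed above map onto a PEFT `LoraConfig` like this; `lora_alpha` and `lora_dropout` are assumptions, since the card states only the rank and target modules:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                                                   # lora_r from the card
    target_modules=["gate_proj", "down_proj", "up_proj"],   # from the card
    lora_alpha=16,        # assumption: not stated in the card
    lora_dropout=0.05,    # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
```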
Minirecord/llama13b_dpo_loss0_OTL
Minirecord
"2023-12-06T05:33:15Z"
1,317
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-06T05:26:32Z"
--- license: apache-2.0 ---
jjourney1125/llama2-13b-v0.1
jjourney1125
"2023-12-16T13:30:38Z"
1,317
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-16T11:01:47Z"
--- license: apache-2.0 ---
hermes42/Mistral-7B-Instruct-v0.3-imatrix-GGUF
hermes42
"2024-05-22T20:59:44Z"
1,317
3
null
[ "gguf", "nlp", "code", "imatrix", "text-generation", "license:apache-2.0", "region:us" ]
text-generation
"2024-05-22T18:18:47Z"
--- license: apache-2.0 pipeline_tag: text-generation tags: - nlp - code - gguf - imatrix --- GGUF quants of https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3 with importance matrix calculations run on group_10_merged.txt for improved perplexity. Quantized with llama.cpp as of commit 03d8900ebe062355e26a562379daee5f17ea099f from 2024-05-22. Original model card below: # Model Card for Mistral-7B-Instruct-v0.3 The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3. Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2): - Extended vocabulary to 32768 - Supports v3 Tokenizer - Supports function calling ## Installation It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling. ``` pip install mistral_inference ``` ## Download ```py from huggingface_hub import snapshot_download from pathlib import Path mistral_models_path = Path.home().joinpath('mistral_models', '7B-Instruct-v0.3') mistral_models_path.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="mistralai/Mistral-7B-Instruct-v0.3", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path) ``` ### Chat After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using ``` mistral-chat $HOME/mistral_models/7B-Instruct-v0.3 --instruct --max_tokens 256 ``` ### Instruct following ```py from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3") model = Transformer.from_folder(mistral_models_path) completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")]) tokens = tokenizer.encode_chat_completion(completion_request).tokens out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) print(result) ``` ### Function calling ```py from mistral_common.protocol.instruct.tool_calls import Function, Tool from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.protocol.instruct.messages import UserMessage from mistral_common.protocol.instruct.request import ChatCompletionRequest tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tokenizer.model.v3") model = Transformer.from_folder(mistral_models_path) completion_request = ChatCompletionRequest( tools=[ Tool( function=Function( name="get_current_weather", description="Get the current weather", parameters={ "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA", }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the user's location.", }, }, "required": ["location", "format"], }, ) ) ], messages=[ UserMessage(content="What's the weather like today in Paris?"), ], ) tokens = tokenizer.encode_chat_completion(completion_request).tokens out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0]) print(result) ``` ## Generate with `transformers` If you want to use Hugging Face `transformers` to generate text, you can do something like this. ```py from transformers import pipeline messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3") chatbot(messages) ``` ## Limitations The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF
mradermacher
"2024-06-11T22:31:09Z"
1,317
0
transformers
[ "transformers", "gguf", "llama-factory", "en", "base_model:JCHAVEROT/Qwen2-0.5B-Chat_SFT_DPO", "endpoints_compatible", "region:us" ]
null
"2024-06-11T22:24:56Z"
--- base_model: JCHAVEROT/Qwen2-0.5B-Chat_SFT_DPO language: - en library_name: transformers quantized_by: mradermacher tags: - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/JCHAVEROT/Qwen2-0.5B-Chat_SFT_DPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.IQ3_S.gguf) | IQ3_S | 0.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.IQ3_XS.gguf) | IQ3_XS | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.IQ3_M.gguf) | IQ3_M | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2-0.5B-Chat_SFT_DPO-GGUF/resolve/main/Qwen2-0.5B-Chat_SFT_DPO.f16.gguf) | f16 | 1.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
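For readers unsure how to use the GGUF files listed above, a minimal llama-cpp-python sketch; the quant filename is one of the files in the table (download it first), and `n_ctx` is an assumption:

```python
from llama_cpp import Llama

# Load a downloaded quant; pick whichever file from the table you fetched.
llm = Llama(model_path="Qwen2-0.5B-Chat_SFT_DPO.Q4_K_M.gguf", n_ctx=2048)
out = llm("Explain GGUF quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```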
LightningJay/L3-8B-Stheno-v3.2_Q8_0_gguf_and_exl2-bpw_8_bit_quantization
LightningJay
"2024-06-27T04:34:09Z"
1,317
0
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2024-06-27T03:45:32Z"
--- license: apache-2.0 ---
kyujinpy/KO-Platypus2-7B-ex
kyujinpy
"2023-10-19T13:27:22Z"
1,316
23
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "ko", "dataset:kyujinpy/KOpen-platypus", "arxiv:2308.07317", "arxiv:2307.09288", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-31T18:25:00Z"
--- language: - en - ko datasets: - kyujinpy/KOpen-platypus library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- **This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.** **The license is `cc-by-nc-sa-4.0`.** # **Ko-Platypus2-7B-EX** **More detail repo (GitHub): [KO-Platypus](https://github.com/Marker-Inc-Korea/KO-Platypus)** ![KO-Platypus2-13B](./KO_platypus.png) ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** KO-Platypus2-7B-ex is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) **Training Dataset** I use [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus). It is a high-quality Korean translation of [open-platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). I used an A100 40GB GPU and Colab for training. **Vocab Expansion** | Model Name | Vocabulary Size | Description | | --- | --- | --- | | Original Platypus2 | 32000 | Sentencepiece BPE | | **Expanded KO-Platypus-ex** | 46336 | Sentencepiece BPE. Added Korean vocab and merges | **Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."** | Model | Tokens | | --- | --- | | Platypus2-7b | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '<0xEB>', '<0x82>', '<0xA0>', '씨', '가', '▁', '<0xEC>', '<0xA2>', '<0x8B>', '<0xEB>', '<0x84>', '<0xA4>', '요', '.']` | | KO-Platypus2-7b-ex | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.']` | **Tokenizing "Platypus: Quick, Cheap, and Powerful Refinement of LLMs"** | Model | Tokens | | --- | --- | | Platypus2-7b | `['▁Plat', 'yp', 'us', ':', '▁Quick', ',', '▁Che', 'ap', ',', '▁and', '▁Power', 'ful', '▁Re', 'fin', 'ement', '▁of', '▁L', 'LM', 's']` | | KO-Platypus2-7b-ex | `['▁Plat', 'yp', 'us', ':', '▁Quick', ',', '▁Che', 'ap', ',', '▁and', '▁Power', 'ful', '▁Re', 'fin', 'ement', '▁of', '▁L', 'LM', 's']` | # **Model Benchmark** ## LM Eval Harness - Korean (polyglot branch) - Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) > Question Answering (QA) ### COPA (F1) ![jpg](./results/copa.jpg) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.7196 | 0.7193 | 0.7204 | 0.7206 | | [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.7595 | 0.7608 | 0.7638 | 0.7788 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 | | [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.5594 | 0.5913 | 0.5863 | 0.5916 | | **KO-platypus2-7B-EX(ours)** | 0.7509 | 0.7899 | 0.8029 | 0.8290 | *Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B > Natural Language Inference (NLI) ### HellaSwag (F1) ![jpg](./results/hella.jpg) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.5247 | 0.5260 | 0.5278 | 
0.5427 | | [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.5707 | 0.5830 | 0.5670 | 0.5787 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 | | [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.4097 | 0.4258 | 0.4358 | 0.4271 | | **KO-platypus2-7B-EX(ours)** | 0.4571 | 0.4461 | 0.4371 | 0.4525 | *Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B > Question Answering (QA) ### BoolQ (F1) ![jpg](./results/bool.jpg) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.3552 | 0.4751 | 0.4109 | 0.4038 | | [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4320 | 0.5263 | 0.4930 | 0.4038 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 | | [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.3419 | 0.6024 | 0.5630 | 0.5461 | | **KO-platypus2-7B-EX(ours)** | 0.6028 | 0.6979 | 0.7016 | 0.6988 | *Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B > Classification ### SentiNeg (F1) ![jpg](./results/senti.jpg) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.6790 | 0.6257 | 0.5514 | 0.7851 | | [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4858 | 0.7950 | 0.7320 | 0.7851 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 | | [*Platypus2-7B](https://huggingface.co/garage-bAInd/Platypus2-7B) | 0.4098 | 0.7388 | 0.7558 | 0.8129 | | **KO-platypus2-7B-EX(ours)** | 0.5821 | 0.7653 | 0.7991 | 0.8643 | *Platypus2-7B Original used https://huggingface.co/garage-bAInd/Platypus2-7B # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/KO-Platypus2-7B-ex" ko_platypus = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) ko_platypus_tokenizer = AutoTokenizer.from_pretrained(repo) ``` > Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) --- > Below is the original model card of the Platypus2-13B model. # Platypus2-13B Platypus-13B is an instruction fine-tuned model based on the LLaMA2-13B transformer architecture. 
![Platty](./Best_Platty_small.jpeg) ### Benchmark Metrics | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 56.70 | | ARC (25-shot) | 61.26 | | HellaSwag (10-shot) | 82.56 | | TruthfulQA (0-shot) | 44.86 | | Avg. | 61.35 | We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results. ### Model Details * **Trained by**: Cole Hunter & Ariel Lee * **Model type:** **Platypus2-13B** is an auto-regressive language model based on the LLaMA2 transformer architecture. * **Language(s)**: English * **License for base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) ### Prompt Template ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ### Training Dataset `garage-bAInd/Platypus2-13B` was trained using the STEM and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information. ### Training Procedure `garage-bAInd/Platypus2-13B` was instruction fine-tuned using LoRA on 1 A100 80GB. For training details and inference instructions please see the [Platypus2](https://github.com/arielnlee/Platypus) GitHub repo. ### Reproducing Evaluation Results Install LM Evaluation Harness: ``` # clone repository git clone https://github.com/EleutherAI/lm-evaluation-harness.git # change to repo directory cd lm-evaluation-harness # check out the correct commit git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463 # install pip install -e . ``` Each task was evaluated on 1 A100 80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus2-13B/truthfulqa_0shot.json --device cuda ``` ### Limitations and bias Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ ### Citations ```bibtex @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } ``` ```bibtex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, } ``` ```bibtex @inproceedings{ hu2022lora, title={Lo{RA}: Low-Rank Adaptation of Large Language Models}, author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen}, booktitle={International Conference on Learning Representations}, year={2022}, url={https://openreview.net/forum?id=nZeVKeeFYf9} } ```
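Returning to the KO-Platypus2-7B-ex vocab expansion shown earlier in this card, a sketch that reproduces the tokenization comparison (model IDs taken from the card):

```python
from transformers import AutoTokenizer

# Compare the base and Korean-expanded tokenizers on the card's example sentence.
base = AutoTokenizer.from_pretrained("garage-bAInd/Platypus2-7B")
expanded = AutoTokenizer.from_pretrained("kyujinpy/KO-Platypus2-7B-ex")

text = "안녕하세요, 오늘은 날씨가 좋네요."
print(base.tokenize(text))      # many byte-fallback tokens
print(expanded.tokenize(text))  # a handful of Korean subwords
```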
kyujinpy/CoT-llama-2k-7b
kyujinpy
"2023-10-19T13:28:07Z"
1,316
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KoCoT_2000", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-23T19:02:28Z"
--- language: - ko datasets: - kyujinpy/KoCoT_2000 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- **This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.** **The license is `cc-by-nc-sa-4.0`.** # **CoT-llama2-7B** ![img](./CoT-llama.png) **More detail repo (GitHub): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2)** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** CoT-llama2 is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) **Training Dataset** I use [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000). It was translated from [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) using DeepL. I used an A100 40GB GPU and Colab for training. **Training Hyperparameters** | Hyperparameters | Value | | --- | --- | | batch_size | `64` | | micro_batch_size | `1` | | Epochs | `15` | | learning_rate | `1e-5` | | cutoff_len | `2048` | | lr_scheduler | `linear` | | base_model | `beomi/llama-2-ko-7b` | # **Model Benchmark** ## LM Eval Harness - Korean (polyglot branch) - Used EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) > Question Answering (QA) ### COPA (F1) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.7196 | 0.7193 | 0.7204 | 0.7206 | | [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.7595 | 0.7608 | 0.7638 | 0.7788 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.7745 | 0.7676 | 0.7775 | 0.7887 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.7937 | 0.8108 | 0.8037 | 0.8369 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7388 | 0.7626 | 0.7808 | 0.7979 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.7436 | 0.7927 | 0.8037 | 0.8259 | | [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.7509 | 0.7899 | 0.8029 | 0.8290 | | **CoT-llama2-7B(ours)** | 0.7528 | 0.7888 | 0.7998 | 0.8210 | > Natural Language Inference (NLI) ### HellaSwag (F1) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.5247 | 0.5260 | 0.5278 | 0.5427 | | [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.5707 | 0.5830 | 0.5670 | 0.5787 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.5976 | 0.5998 | 0.5979 | 0.6208 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.5954 | 0.6306 | 0.6098 | 0.6118 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4518 | 0.4668 | 0.4726 | 0.4828 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4562 | 0.4657 | 0.4698 | 0.4774 | | [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.4571 | 0.4461 | 0.4371 | 0.4525 | | **CoT-llama2-7B(ours)** | 0.4543 | 0.4554 | 0.4606 | 0.4579 | > Question Answering (QA) ### BoolQ (F1) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.3552 | 0.4751 | 0.4109 | 0.4038 | | 
[Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4320 | 0.5263 | 0.4930 | 0.4038 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.4356 | 0.5698 | 0.5187 | 0.5236 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.4818 | 0.6041 | 0.6289 | 0.6448 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.3607 | 0.6797 | 0.6801 | 0.6622 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.5786 | 0.6977 | 0.7084 | 0.7144 | | [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.6028 | 0.6979 | 0.7016 | 0.6988 | | **CoT-llama2-7B(ours)** | 0.5852 | 0.6947 | 0.7059 | 0.7213 | > Classification ### SentiNeg (F1) | Model | 0-shot | 5-shot | 10-shot | 50-shot | | --- | --- | --- | --- | --- | | [Polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 0.6790 | 0.6257 | 0.5514 | 0.7851 | | [Polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 0.4858 | 0.7950 | 0.7320 | 0.7851 | | [Polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 0.3394 | 0.8841 | 0.8808 | 0.9521 | | [Polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) | 0.9117 | 0.9015 | 0.9345 | 0.9723 | | [Llama-2-Ko-7b 20B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4855 | 0.8295 | 0.8711 | 0.8513 | | [Llama-2-Ko-7b 40B](https://huggingface.co/beomi/llama-2-ko-7b) | 0.4594 | 0.7611 | 0.7276 | 0.9370 | | [KO-platypus2-7B-EX](https://huggingface.co/kyujinpy/KO-Platypus2-7B-ex) | 0.5821 | 0.7653 | 0.7991 | 0.8643 | | **CoT-llama2-7B(ours)** | 0.5045 | 0.8054 | 0.7942 | 0.9446 | # Implementation Code ```python ### CoT-llama from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/CoT-llama-2k-7b" cot_llama = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) cot_llama_tokenizer = AutoTokenizer.from_pretrained(repo) ``` > Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) ---
kyujinpy/KoT-platypus2-13B
kyujinpy
"2023-10-19T13:29:36Z"
1,316
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KoCoT_2000", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-05T18:16:45Z"
--- language: - ko datasets: - kyujinpy/KoCoT_2000 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- **This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.** **The license is `cc-by-nc-sa-4.0`.** # **KoT-platypus2** ![img](./KoT-platypus2.png) **CoT + KO-platypus2 = KoT-platypus2** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** KoT-platypus2-13B is an auto-regressive language model based on the LLaMA2 transformer architecture. **Repo Link** GitHub KoT-platypus: [KoT-platypus2](https://github.com/KyujinHan/KoT-platypus) **Base Model** [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) More detail repo (GitHub): [CoT-llama2](https://github.com/Marker-Inc-Korea/CoT-llama2) More detail repo (GitHub): [KO-Platypus2](https://github.com/Marker-Inc-Korea/KO-Platypus) **Training Dataset** I use [KoCoT_2000](https://huggingface.co/datasets/kyujinpy/KoCoT_2000). It was translated from [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) using DeepL. I used an A100 40GB GPU and Colab for training. **Training Hyperparameters** | Hyperparameters | Value | | --- | --- | | batch_size | `64` | | micro_batch_size | `1` | | Epochs | `15` | | learning_rate | `1e-5` | | cutoff_len | `4096` | | lr_scheduler | `linear` | | base_model | `kyujinpy/KO-Platypus2-13B` | # **Model Benchmark** ## KO-LLM leaderboard - Tracked on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). ![img](./leaderboard.png) | Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | --- | --- | --- | --- | --- | --- | --- | | KoT-Platypus2-13B(ours) | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 | | [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 | | [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 | | [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 | | [momo/polyglot-ko-12.8b-Chat-QLoRA-Merge](https://huggingface.co/momo/polyglot-ko-12.8b-Chat-QLoRA-Merge) | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 | > Compare with Top 4 SOTA models. (update: 10/07) # Implementation Code ```python ### KoT-platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/KoT-platypus2-13B" kot_platypus = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) kot_platypus_tokenizer = AutoTokenizer.from_pretrained(repo) ``` > Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) ---
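A hypothetical mapping of the hyperparameter table above onto `transformers.TrainingArguments`; splitting the effective batch of 64 into 1 micro-batch × 64 accumulation steps, and the output directory, are assumptions:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="kot-platypus2-13b",   # hypothetical
    per_device_train_batch_size=1,    # micro_batch_size from the card
    gradient_accumulation_steps=64,   # batch_size / micro_batch_size (assumption)
    num_train_epochs=15,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
)
```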
nayohan/polyglot-ko-5.8b-Inst
nayohan
"2023-10-26T10:37:12Z"
1,316
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "polyglot-ko", "gpt-neox", "KoQuality", "ko", "dataset:DILAB-HYU/KoQuality", "base_model:EleutherAI/polyglot-ko-5.8b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-11T18:17:48Z"
--- language: - ko license: apache-2.0 tags: - generated_from_trainer - polyglot-ko - gpt-neox - KoQuality datasets: - DILAB-HYU/KoQuality pipeline_tag: text-generation base_model: EleutherAI/polyglot-ko-5.8b model-index: - name: KoAlpaca-Polyglot-5.8B results: [] --- This model is a test version trained by integrating several instruction datasets. The final version can be found at [DILAB-HYU/KoQuality-Polyglot-5.8b](https://huggingface.co/DILAB-HYU/KoQuality-Polyglot-5.8b). ## Training hyperparameters - learning_rate: 5e-5 - train_batch_size: 2 - seed: 42 - distributed_type: multi-GPU (A30 24G) + CPU offloading - num_devices: 2 - gradient_accumulation_steps: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ## Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu117 - Datasets 2.11.0 - deepspeed 0.9.5
maywell/Synatra_TbST11B_EP01
maywell
"2023-10-18T12:27:36Z"
1,316
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-18T07:07:18Z"
--- language: - ko library_name: transformers pipeline_tag: text-generation license: cc-by-nc-4.0 --- # **Synatra_TbST11B_EP01** Made by StableFluffy **Contact (do not contact for personal matters)** Discord : is.maywell Telegram : AlzarTakkarsen ## License This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only, which takes priority over the **MISTRAL APACHE 2.0** license. The "Model" is completely free (i.e. base model, derivatives, merges/mixes) to use for non-commercial purposes as long as the included **cc-by-nc-4.0** license in any parent repository and the non-commercial-use statute remain, regardless of other models' licences. The license can be changed after a new model is released. If you want to use this model for commercial purposes, contact me. ## Model Details **Base Model** [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) **Trained On** A100 80GB * 4 # **Model Benchmark** None yet. > Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) ---
cepiloth/ko-llama2-finetune-ex3
cepiloth
"2023-11-01T07:17:40Z"
1,316
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-26T21:30:44Z"
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " --- # Model Trained Using AutoTrain # License Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under the LLAMA 2 COMMUNITY LICENSE AGREEMENT. This model was created as a personal experiment, unrelated to the organization I work for.
MNC-Jihun/Mistral-7B-OP-u1k-ver0.7
MNC-Jihun
"2023-10-31T06:30:52Z"
1,316
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-31T00:47:47Z"
Entry not found
Junmai/KIT-7B-v3
Junmai
"2023-11-09T02:06:42Z"
1,316
0
transformers
[ "transformers", "pytorch", "llama", "feature-extraction", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
"2023-11-09T01:21:32Z"
Entry not found
kyujinpy/KOR-Orca-Platypus-13B-v3
kyujinpy
"2023-11-12T18:19:59Z"
1,316
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-12T12:23:16Z"
--- language: - ko datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 library_name: transformers pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- **This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.** **The license is `cc-by-nc-sa-4.0`.** # **🐳KOR-Orca-Platypus-13B🐳** ![img](./Korean-OpenOrca.png) ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture. **Repo Link** GitHub Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca) **Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) **Training Dataset** I use [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3). (with NEFTune.) I used an A100 40GB GPU and Colab for training. # **Model Benchmark** ## KO-LLM leaderboard - Tracked on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). | Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | --- | --- | --- | --- | --- | --- | --- | | KOR-Orca-Platypus-13B🐳 | 46.59 | 42.06 | 53.95 | 42.28 | 43.55 | 51.12 | | **KOR-Orca-Platypus-13B🐳-v2** | 49.48 | 44.03 | 54.43 | 42.23 | 41.64 | 65.05 | | KOR-Orca-Platypus-13B🐳-v3 | 48.37 | 43.77 | 54.27 | 42.66 | 38.58 | 62.57 | > Compare with Top 4 SOTA models. (update: 10/09) # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "kyujinpy/KOR-Orca-Platypus-13B-v3" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` ---
chargoddard/Yi-6B-Llama
chargoddard
"2023-11-14T01:22:27Z"
1,316
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-14T01:12:56Z"
Entry not found
shleeeee/mistral-ko-7b-wiki-neft
shleeeee
"2024-03-08T00:11:04Z"
1,316
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "finetune", "ko", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-29T04:46:44Z"
--- language: - ko pipeline_tag: text-generation tags: - finetune --- # Model Card for mistral-ko-7b-wiki-neft It is a Mistral-7B model fine-tuned on Korean data with NEFTune. ## Model Details * **Model Developers** : shleeeee(Seunghyeon Lee), oopsung(Sungwoo Park) * **Repository** : To be added * **Model Architecture** : The mistral-ko-7b-wiki-neft is a fine-tuned version of the Mistral-7B-v0.1. * **Lora target modules** : q_proj, k_proj, v_proj, o_proj, gate_proj * **train_batch** : 4 * **neftune_noise_alpha** : 5 * **Max_step** : 1000 ## Dataset Korean Custom Dataset ## Prompt template: Mistral ``` <s>[INST]{['instruction']}[/INST]{['output']}</s> ``` ## Usage ``` # Load model directly from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("shleeeee/mistral-7b-wiki") model = AutoModelForCausalLM.from_pretrained("shleeeee/mistral-7b-wiki") # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="shleeeee/mistral-7b-wiki") ``` ## Evaluation ![image/png](https://cdn-uploads.huggingface.co/production/uploads/654495fa893aec5da96e9134/p1aJ4YMdP_E9YzhTcuaFx.png)
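For reference, a sketch of how the listed NEFTune setting could be enabled with recent `transformers` versions; the output directory and the overall training setup are assumptions, only the three numbers come from the card:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-ko-7b-wiki-neft",  # hypothetical
    per_device_train_batch_size=4,         # train_batch from the card
    max_steps=1000,                        # Max_step from the card
    neftune_noise_alpha=5,                 # NEFTune noise from the card
)
```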
inswave/AISquare-Instruct-llama2-koen-13b-v0.9.3
inswave
"2023-11-30T11:33:23Z"
1,316
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-30T11:08:57Z"
Entry not found
inswave/AISquare-Instruct-llama2-koen-13b-v0.9.12
inswave
"2023-12-01T23:44:29Z"
1,316
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-01T23:28:04Z"
Entry not found
Minirecord/Mini_llama13b_test123
Minirecord
"2023-12-12T09:38:21Z"
1,316
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-12T09:32:31Z"
--- license: apache-2.0 ---
Minirecord/minyi_5k_6B
Minirecord
"2023-12-27T06:44:09Z"
1,316
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-27T06:40:23Z"
--- license: apache-2.0 ---
M4-ai/TinyMistral-6x248M
M4-ai
"2024-01-30T23:17:34Z"
1,316
9
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "Locutusque/TinyMistral-248M-v2", "Locutusque/TinyMistral-248M-v2.5", "Locutusque/TinyMistral-248M-v2.5-Instruct", "jtatman/tinymistral-v2-pycoder-instruct-248m", "Felladrin/TinyMistral-248M-SFT-v4", "Locutusque/TinyMistral-248M-v2-Instruct", "dataset:nampdn-ai/mini-peS2o", "base_model:Locutusque/TinyMistral-248M-v2", "base_model:Locutusque/TinyMistral-248M-v2.5", "base_model:Locutusque/TinyMistral-248M-v2.5-Instruct", "base_model:jtatman/tinymistral-v2-pycoder-instruct-248m", "base_model:Felladrin/TinyMistral-248M-SFT-v4", "base_model:Locutusque/TinyMistral-248M-v2-Instruct", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-28T01:39:05Z"
--- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - Locutusque/TinyMistral-248M-v2 - Locutusque/TinyMistral-248M-v2.5 - Locutusque/TinyMistral-248M-v2.5-Instruct - jtatman/tinymistral-v2-pycoder-instruct-248m - Felladrin/TinyMistral-248M-SFT-v4 - Locutusque/TinyMistral-248M-v2-Instruct base_model: - Locutusque/TinyMistral-248M-v2 - Locutusque/TinyMistral-248M-v2.5 - Locutusque/TinyMistral-248M-v2.5-Instruct - jtatman/tinymistral-v2-pycoder-instruct-248m - Felladrin/TinyMistral-248M-SFT-v4 - Locutusque/TinyMistral-248M-v2-Instruct inference: parameters: do_sample: true temperature: 0.2 top_p: 0.14 top_k: 12 max_new_tokens: 250 repetition_penalty: 1.15 widget: - text: | <|im_start|>user Write me a Python program that calculates the factorial of n. <|im_end|> <|im_start|>assistant - text: >- An emerging clinical approach to treat substance abuse disorders involves a form of cognitive-behavioral therapy whereby addicts learn to reduce their reactivity to drug-paired stimuli through cue-exposure or extinction training. It is, however, datasets: - nampdn-ai/mini-peS2o --- # TinyMistral-6x248M TinyMistral-6x248M is a Mixture of Experts (MoE) model made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Locutusque/TinyMistral-248M-v2](https://huggingface.co/Locutusque/TinyMistral-248M-v2) * [Locutusque/TinyMistral-248M-v2.5](https://huggingface.co/Locutusque/TinyMistral-248M-v2.5) * [Locutusque/TinyMistral-248M-v2.5-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2.5-Instruct) * [jtatman/tinymistral-v2-pycoder-instruct-248m](https://huggingface.co/jtatman/tinymistral-v2-pycoder-instruct-248m) * [Felladrin/TinyMistral-248M-SFT-v4](https://huggingface.co/Felladrin/TinyMistral-248M-SFT-v4) * [Locutusque/TinyMistral-248M-v2-Instruct](https://huggingface.co/Locutusque/TinyMistral-248M-v2-Instruct) The resulting model was then pre-trained on 600,000 examples of nampdn-ai/mini-peS2o. We don't recommend using the Inference API, as the model suffers serious performance degradation there. ### Recommended inference parameters ``` do_sample: true temperature: 0.2 top_p: 0.14 top_k: 12 repetition_penalty: 1.15 ``` ## 🧩 Configuration ```yaml base_model: Locutusque/TinyMistral-248M-v2.5 experts: - source_model: Locutusque/TinyMistral-248M-v2 positive_prompts: - "An emerging trend in global economics is" - "TITLE: The Next Generation of Internet Connectivity" - "begin a comprehensive analysis on the sociopolitical effects of" negative_prompts: - "Code a simple" - "Explain the Krebs cycle in detail" - "Compose a sonnet about" - source_model: Locutusque/TinyMistral-248M-v2.5 positive_prompts: - "Advanced C++ memory management techniques" - "C# asynchronous programming best practices" - "AI's role in predictive analytics" - "textbook review on machine learning algorithms" - "## Exercise: Design a C# interface for a CRM system" - "## Solution: Optimize an AI-powered recommendation engine" negative_prompts: - "Narrate the story of" - "The ethical considerations in" - "Review the latest art exhibition by" - source_model: Locutusque/TinyMistral-248M-v2.5-Instruct positive_prompts: - "What is the chemical formula for photosynthesis?" 
- "Identification of a new mineral found on Mars" - "physics: Explaining the concept of relativity" - "Solve for x using differential equations:" - "history: Analyze the causes of the French Revolution" negative_prompts: - "Devise a business plan for" - "The evolution of culinary arts" - "Orchestrate a piece for a string quartet" - source_model: jtatman/tinymistral-v2-pycoder-instruct-248m positive_prompts: - "Write a Python program for facial recognition" - "Explain dynamic typing in programming languages" - "algorithm development for efficient data sorting" negative_prompts: - "Who was the first Emperor of Rome?" - "Discuss the political dynamics in" - "Provide a proof for Fermat's Last Theorem" - "physics: The principles of thermodynamics" - source_model: Felladrin/TinyMistral-248M-SFT-v4 positive_prompts: - "Escreba sobre a influência da música no Brasil" - "Voici un guide pour les voyageurs en France" - "Para entender la política de México, se debe considerar" - "Cuales son los efectos de la globalización en Argentina" - "Welche gesellschaftlichen Veränderungen gibt es in Deutschland" - "If you had to imagine a utopian city, what would be its core values?" negative_prompts: - "Calculate the integral of" - "Describe the process of cell division" - "Review the latest advancements in quantum computing" - source_model: Locutusque/TinyMistral-248M-v2-Instruct positive_prompts: - "Write an essay on the evolution of international trade laws" - "What are the key components of a sustainable urban ecosystem?" - "instruct on effective negotiation techniques in diplomacy" - "How does cognitive bias affect decision making in high-pressure environments?" - "Identify the architectural significance of the Sydney Opera House" negative_prompts: - "Develop a script to automate" - "Understanding inheritance in object-oriented programming" - "philosophy of existentialism in contemporary society" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "M4-ai/TinyMistral-6x248M" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
ilsilfverskiold/traffic-levels-image-classification
ilsilfverskiold
"2024-05-06T07:54:05Z"
1,316
2
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-05-05T17:10:10Z"
--- license: apache-2.0 base_model: google/vit-base-patch16-224 tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: vit-base-patch16-224-finetuned-traffic results: [] --- # Traffic level image classification This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on a dataset of public traffic-camera images. It achieves the following results on the evaluation set: - Loss: 0.4394 - Accuracy: 0.8292 - Precision: 0.8232 - Recall: 0.7366 - F1: 0.7721 ## Model description Built from 6,000 images fetched from public traffic cameras in Norway to classify traffic levels as low, medium, or high. The dataset is unbalanced, skewed toward low-traffic images. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.6282 | 0.9843 | 47 | 0.5725 | 0.7644 | 0.7933 | 0.5918 | 0.6525 | | 0.4486 | 1.9895 | 95 | 0.4630 | 0.8012 | 0.7964 | 0.6824 | 0.7213 | | 0.3285 | 2.9948 | 143 | 0.4394 | 0.8292 | 0.8232 | 0.7366 | 0.7721 | | 0.2391 | 4.0 | 191 | 0.4302 | 0.8115 | 0.7941 | 0.7333 | 0.7555 | | 0.1814 | 4.9215 | 235 | 0.4365 | 0.8218 | 0.7993 | 0.7362 | 0.7631 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
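The card omits a usage snippet; a minimal inference sketch with the `transformers` pipeline (the image path is hypothetical):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ilsilfverskiold/traffic-levels-image-classification",
)
print(classifier("traffic_cam.jpg"))  # hypothetical local image; URLs also work
```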
QuantFactory/Llama-3-8B-Magpie-Pro-SFT-v0.1-GGUF
QuantFactory
"2024-06-19T11:52:45Z"
1,316
2
null
[ "gguf", "axolotl", "generated_from_trainer", "text-generation", "arxiv:2406.08464", "base_model:Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1", "license:llama3", "region:us" ]
text-generation
"2024-06-19T05:30:49Z"
--- license: llama3 base_model: Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1 tags: - axolotl - generated_from_trainer model-index: - name: Llama-3-8B-Magpie-Pro-SFT-v0.1 results: [] pipeline_tag: text-generation --- # 🐦 Llama-3-8B-Magpie-Pro-SFT-v0.1-GGUF This is a quantized version of [Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1](https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-SFT-v0.1) created using llama.cpp # Model Description Project Web: [https://magpie-align.github.io/](https://magpie-align.github.io/) Arxiv Technical Report: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464) Codes: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie) ## Abstract <details><summary>Click Here</summary> High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open-source data creation methods from scaling effectively, potentially limiting the diversity and quality of public alignment datasets. Is it possible to synthesize high-quality instruction data at scale by extracting it directly from an aligned LLM? We present a self-synthesis method for generating large-scale alignment data named Magpie. Our key observation is that aligned LLMs like Llama-3-Instruct can generate a user query when we input only the left-side templates up to the position reserved for user messages, thanks to their auto-regressive nature. We use this method to prompt Llama-3-Instruct and generate 4 million instructions along with their corresponding responses. We perform a comprehensive analysis of the extracted data and select 300K high-quality instances. To compare Magpie data with other public instruction datasets, we fine-tune Llama-3-8B-Base with each dataset and evaluate the performance of the fine-tuned models. Our results indicate that in some tasks, models fine-tuned with Magpie perform comparably to the official Llama-3-8B-Instruct, despite the latter being enhanced with 10 million data points through supervised fine-tuning (SFT) and subsequent feedback learning. We also show that using Magpie solely for SFT can surpass the performance of previous public datasets utilized for both SFT and preference optimization, such as direct preference optimization with UltraFeedback. This advantage is evident on alignment benchmarks such as AlpacaEval, ArenaHard, and WildBench. </details><br> ## About This Model This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [Magpie-Align/Magpie-Pro-300K-Filtered](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) dataset. It achieves performance comparable with the official Llama-3-8B-Instruct Model with SFT only! - **Alpaca Eval 2 (GPT-4-Turbo-1106): 25.08 (LC), 29.47 (WR)** - **Alpaca Eval 2 (Llama-3-8B-Instruct): 52.12 (LC), 53.43 (WR)** - **Arena Hard: 18.9** ## Other Information **License**: Please follow [Meta Llama 3 Community License](https://llama.meta.com/llama3/license). **Conversation Template**: Please use the Llama 3 **official chat template** for the best performance. 
## Original Model Citation If you find the model, data, or code useful, please cite our paper: ``` @misc{xu2024magpie, title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing}, author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin}, year={2024}, eprint={2406.08464}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.8664 | 0.0012 | 1 | 0.8860 | | 0.4038 | 0.9989 | 825 | 0.4250 | | 0.327 | 1.9830 | 1650 | 0.4219 | ### Framework versions - Transformers 4.40.2 - Pytorch 2.3.0+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: meta-llama/Meta-Llama-3-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false datasets: - path: Magpie-Align/Magpie-Pro-300K-Filtered type: sharegpt conversation: llama3 dataset_prepared_path: last_run_prepared val_set_size: 0.001 output_dir: ./out_Llama-3-8B-Magpie-Pro-300K-FilteredL sequence_len: 8192 sample_packing: true eval_sample_packing: false pad_to_sequence_len: true gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 2 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 1 eval_table_size: saves_per_epoch: 3 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br>
timm/tf_efficientnet_lite4.in1k
timm
"2023-04-27T21:38:39Z"
1,315
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:1905.11946", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:14:01Z"
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
---
# Model card for tf_efficientnet_lite4.in1k

An EfficientNet-Lite image classification model. Trained on ImageNet-1k in Tensorflow by paper authors, ported to PyTorch by Ross Wightman.

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 13.0
  - GMACs: 4.0
  - Activations (M): 45.7
  - Image size: 380 x 380
- **Papers:**
  - EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks: https://arxiv.org/abs/1905.11946
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch  # needed for torch.topk below

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('tf_efficientnet_lite4.in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_lite4.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 24, 190, 190])
    #  torch.Size([1, 32, 95, 95])
    #  torch.Size([1, 56, 48, 48])
    #  torch.Size([1, 160, 24, 24])
    #  torch.Size([1, 448, 12, 12])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'tf_efficientnet_lite4.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1280, 12, 12) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation ```bibtex @inproceedings{tan2019efficientnet, title={Efficientnet: Rethinking model scaling for convolutional neural networks}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={6105--6114}, year={2019}, organization={PMLR} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
facebook/mms-tts-orm
facebook
"2023-09-01T13:22:36Z"
1,315
1
transformers
[ "transformers", "pytorch", "safetensors", "vits", "text-to-audio", "mms", "text-to-speech", "arxiv:2305.13516", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
text-to-speech
"2023-09-01T13:22:08Z"
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---

# Massively Multilingual Speech (MMS): Oromo Text-to-Speech

This repository contains the **Oromo (orm)** language text-to-speech (TTS) model checkpoint.

This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.

## Model Details

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.

A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.

For the MMS project, a separate VITS checkpoint is trained on each language.

## Usage

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library:

```
pip install --upgrade transformers accelerate
```

Then, run inference with the following code snippet:

```python
from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-orm")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-orm")

text = "some example text in the Oromo language"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs).waveform
```

The resulting waveform can be saved as a `.wav` file:

```python
import scipy

# convert the (1, num_samples) torch tensor to a 1-D numpy array before writing
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output.squeeze().numpy())
```

Or displayed in a Jupyter Notebook / Google Colab:

```python
from IPython.display import Audio

Audio(output, rate=model.config.sampling_rate)
```

## BibTex citation

This model was developed by Vineel Pratap et al. from Meta AI.
If you use the model, consider citing the MMS paper: ``` @article{pratap2023mms, title={Scaling Speech Technology to 1,000+ Languages}, author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli}, journal={arXiv}, year={2023} } ``` ## License The model is licensed as **CC-BY-NC 4.0**.
fiveflow/kolong-llama-v0.1
fiveflow
"2023-10-10T02:26:27Z"
1,315
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-09T15:58:24Z"
Entry not found
maywell/Synatra_TbST02M_IN01
maywell
"2023-10-18T10:43:21Z"
1,315
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-16T09:09:13Z"
---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **Synatra_TbST02M_IN01**

Made by StableFluffy

**Contact (Do not contact for personal matters.)**

Discord: is.maywell
Telegram: AlzarTakkarsen

## License

This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only, which takes priority over the **MISTRAL APACHE 2.0** license. The "Model" is completely free (i.e. base model, derivatives, merges/mixes) to use for non-commercial purposes as long as the included **cc-by-nc-4.0** license in any parent repository and the non-commercial-use statute remain in place, regardless of other models' licenses. The license may change when a new model is released. If you want to use this model for commercial purposes, contact me.

## Model Details

**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)

**Trained On**
A100 80GB * 4

# **Model Benchmark**

None yet.

> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)

---
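The card ships no usage snippet; below is a minimal sketch, assuming the repository provides a Mistral-style chat template (the card does not confirm this):

```python
# Hedged sketch; assumes a Mistral-style chat template ships with the repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "maywell/Synatra_TbST02M_IN01"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```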
nakhyeonn/llama-2-ko-qlora-prompt
nakhyeonn
"2023-10-23T21:40:44Z"
1,315
0
transformers
[ "transformers", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-23T21:29:21Z"
Entry not found
nayohan/polyglot-ko-5.8b-Inst-All
nayohan
"2023-10-26T10:42:45Z"
1,315
2
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "polyglot-ko", "gpt-neox", "KoQuality", "ko", "dataset:DILAB-HYU/KoQuality", "base_model:EleutherAI/polyglot-ko-5.8b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-24T03:07:08Z"
---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- polyglot-ko
- gpt-neox
- KoQuality
base_model: EleutherAI/polyglot-ko-5.8b
---

This model is an instruction-tuned polyglot-ko-5.8b model, trained on the full [KULLM, OIG, KoAlpaca] instruction datasets.

koquality_raw.json -> 410 steps

## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 2
- seed: 42
- distributed_type: multi-GPU (A30 24G) + CPU Offloading
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0

## Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.11.0
- deepspeed 0.9.5
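No inference example is given in the card; the sketch below is a hedged guess at usage. The `### 명령어 / ### 응답` prompt format is an assumption borrowed from common KULLM/KoAlpaca conventions and is not documented for this checkpoint:

```python
# Hedged sketch; the "### 명령어 / ### 응답" format below is an assumption
# borrowed from common KULLM/KoAlpaca conventions, not documented in this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nayohan/polyglot-ko-5.8b-Inst-All"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### 명령어:\n한국의 전통 음식 세 가지를 소개해 주세요.\n\n### 응답:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```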
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.2.3
krevas
"2023-10-28T05:39:39Z"
1,315
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-28T05:12:21Z"
--- license: cc-by-nc-4.0 ---
hyeogi/open-llama2-7b-v0.1
hyeogi
"2023-12-15T22:54:56Z"
1,315
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-15T23:24:19Z"
Entry not found
inswave/AISquare-Instruct-llama2-koen-13b-v0.9.25
inswave
"2023-12-19T01:58:16Z"
1,315
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-19T01:52:14Z"
Entry not found
genne/eclectus_1.1_dedup
genne
"2023-12-27T23:55:59Z"
1,315
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-27T23:52:00Z"
Entry not found
jeonsworld/CarbonVillain-10.7B-v1
jeonsworld
"2024-01-02T11:09:55Z"
1,315
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-31T06:34:32Z"
---
license: apache-2.0
language:
- ko
---

# CarbonVillain

**This is a model created without learning, to oppose indiscriminate carbon emissions.**

This model is an experimental version created using [mergekit](https://github.com/cg123/mergekit).

- merged models:
  - Megastudy/M-SOLAR-10.7B-v1.1-beta
  - jjourney1125/M-SOLAR-10.7B-v1.0
- method: slerp
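The exact merge configuration is not published in the card; a mergekit slerp config consistent with the listed models might look like the sketch below — the layer range, base-model choice, interpolation factor `t`, and dtype are all assumptions:

```yaml
# Hypothetical mergekit config; layer_range, base_model, t, and dtype are guesses.
slices:
  - sources:
      - model: Megastudy/M-SOLAR-10.7B-v1.1-beta
        layer_range: [0, 48]
      - model: jjourney1125/M-SOLAR-10.7B-v1.0
        layer_range: [0, 48]
merge_method: slerp
base_model: Megastudy/M-SOLAR-10.7B-v1.1-beta
parameters:
  t: 0.5          # 0 = first model, 1 = second model; 0.5 is an even blend
dtype: float16
```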
hfl/llama-3-chinese-8b-gguf
hfl
"2024-04-30T03:58:35Z"
1,315
6
null
[ "gguf", "zh", "en", "license:apache-2.0", "region:us" ]
null
"2024-04-22T06:26:41Z"
---
license: apache-2.0
language:
- zh
- en
---

# Llama-3-Chinese-8B-GGUF

<p align="center">
    <a href="https://github.com/ymcui/Chinese-LLaMA-Alpaca-3"><img src="https://ymcui.com/images/chinese-llama-alpaca-3-banner.png" width="600"/></a>
</p>

This repository contains **Llama-3-Chinese-8B-GGUF** (llama.cpp/ollama/tgw, etc. compatible), which is the quantized version of [Llama-3-Chinese-8B](https://huggingface.co/hfl/llama-3-chinese-8b).

**Note: this is a foundation model, which is not suitable for conversation, QA, etc.**

For further details (performance, usage, etc.), please refer to the GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3

## Performance

Metric: PPL, **lower is better**

*Note: Old models have been removed due to their inferior performance.*

| Quant | Size | PPL (old model) | 👍🏻 PPL (new model) |
| :---: | -------: | ------------------: | ------------------: |
| Q2_K | 2.96 GB | 17.7212 +/- 0.59814 | 11.8595 +/- 0.20061 |
| Q3_K | 3.74 GB | 8.6303 +/- 0.28481 | 5.7559 +/- 0.09152 |
| Q4_0 | 4.34 GB | 8.2513 +/- 0.27102 | 5.5495 +/- 0.08832 |
| Q4_K | 4.58 GB | 7.8897 +/- 0.25830 | 5.3126 +/- 0.08500 |
| Q5_0 | 5.21 GB | 7.7975 +/- 0.25639 | 5.2222 +/- 0.08317 |
| Q5_K | 5.34 GB | 7.7062 +/- 0.25218 | 5.1813 +/- 0.08264 |
| Q6_K | 6.14 GB | 7.6600 +/- 0.25043 | 5.1481 +/- 0.08205 |
| Q8_0 | 7.95 GB | 7.6512 +/- 0.25064 | 5.1350 +/- 0.08190 |
| F16 | 14.97 GB | 7.6389 +/- 0.25001 | 5.1302 +/- 0.08184 |

## Others

- For full model, please see: https://huggingface.co/hfl/llama-3-chinese-8b
- For LoRA-only model, please see: https://huggingface.co/hfl/llama-3-chinese-8b-lora
- If you have questions/issues regarding this model, please submit an issue through https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
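Since this is a base (non-chat) model, plain text completion is the appropriate mode; a minimal llama.cpp invocation might look like the following — the quant filename is an assumption, and older llama.cpp builds name the binary `main` rather than `llama-cli`:

```bash
# Hedged sketch: plain completion with llama.cpp; the filename is hypothetical.
./llama-cli \
  -m llama-3-chinese-8b.Q4_K.gguf \
  -p "中国古代的四大发明是" \
  -n 128
```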
domie/Qwen2-1.5B-Ita
domie
"2024-06-21T20:48:07Z"
1,315
0
transformers
[ "transformers", "gguf", "qwen2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
"2024-06-21T20:10:00Z"
Entry not found
flax-sentence-embeddings/reddit_single-context_mpnet-base
flax-sentence-embeddings
"2021-07-26T01:36:18Z"
1,314
3
sentence-transformers
[ "sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "en", "arxiv:1904.06472", "autotrain_compatible", "endpoints_compatible", "region:us" ]
sentence-similarity
"2022-03-02T23:29:05Z"
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
language: en
---

# Model description

The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base) model and fine-tuned it on a dataset of 700M sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which out of a set of randomly sampled other sentences was actually paired with it in our dataset.

We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face. We developed this model as part of the project: [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance from Google's Flax, JAX, and Cloud team members about efficient deep learning frameworks.

## Intended uses

Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the sentence's semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks.

## How to use

Here is how to use this model to get the features of a given text using the [SentenceTransformers](https://github.com/UKPLab/sentence-transformers) library:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/reddit_single-context_mpnet-base')
text = "Replace me by any text you'd like."
text_embedding = model.encode(text)
# array([-0.01559514,  0.04046123,  0.1317083 ,  0.00085931,  0.04585106,
#        -0.05607086,  0.0138078 ,  0.03569756,  0.01420381,  0.04266302 ...],
#        dtype=float32)
```

# Training procedure

## Pre-training

We use the pretrained ['mpnet-base'](https://huggingface.co/microsoft/mpnet-base) model. Please refer to the model card for more detailed information about the pre-training procedure.

## Fine-tuning

We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity for each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs (a minimal code sketch of this in-batch objective follows the dataset table below).

### Hyper parameters

We trained our model on a TPU v3-8. We trained the model for 540k steps using a batch size of 1024 (128 per TPU core). We used a learning rate warm-up of 500 steps. The sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository.

### Training data

We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 700M. We sampled each dataset with a weighted probability, whose configuration is detailed in the `data_config.json` file. We only use the first context response when building the dataset.
| Dataset | Paper | Number of training tuples |
|:--------------------------------------------------------:|:----------------------------------------:|:--------------------------:|
| [Reddit conversational](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 |
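As referenced in the fine-tuning section above, the in-batch contrastive objective corresponds closely to `MultipleNegativesRankingLoss` in the sentence-transformers library. The sketch below is illustrative only — the toy pairs are invented, and the actual training used a custom JAX/Flax pipeline on TPUs rather than this PyTorch API:

```python
# Minimal PyTorch sketch of the in-batch contrastive objective (illustrative;
# the real training used a custom JAX/Flax script, and these pairs are toy data).
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("microsoft/mpnet-base")

train_examples = [
    InputExample(texts=["What's a good sci-fi novel?", "Try 'The Left Hand of Darkness'."]),
    InputExample(texts=["How do I season a cast-iron pan?", "Thin coat of oil, then bake it upside down."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Every other response in the batch acts as a negative: the loss is a
# cross-entropy over cosine similarities, matching the objective described above.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```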
kyujinpy/Korean-OpenOrca-13B
kyujinpy
"2023-10-19T13:30:00Z"
1,314
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/OpenOrca-KO", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-08T19:07:11Z"
---
language:
- ko
datasets:
- kyujinpy/OpenOrca-KO
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**

**The license is `cc-by-nc-sa-4.0`.**

# **🐳Korean-OpenOrca-13B🐳**

![img](./Korean-OpenOrca.png)

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
Github Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**
I use [OpenOrca-KO](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO), translated from [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) using DeepL.
I used an A100 40GB GPU on Colab for training.

# **Model Benchmark**

## KO-LLM leaderboard
- Following the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Korean-OpenOrca-13B(ours🐳) | 47.85 | 43.09 | 54.13 | 40.24 | 45.22 | 56.57 |
| [KoT-Platypus2-13B](https://huggingface.co/kyujinpy/KoT-platypus2-13B) | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 |
| [KO-Platypus2-13B](https://huggingface.co/kyujinpy/KO-Platypus2-13B) | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b) | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| [MarkrAI/kyujin-CoTy-platypus-ko-12.8b](https://huggingface.co/MarkrAI/kyujin-CoTy-platypus-ko-12.8b) | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |

> Comparison with the top 4 SOTA models. (updated: 10/09)

# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Korean-OpenOrca-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```

---
MNCJ1hun/MIstral-11B-Omni-OP-u1k-ver0.1
MNCJ1hun
"2023-10-29T13:39:48Z"
1,314
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-29T00:19:14Z"
Entry not found
kyujinpy/Korean-OpenOrca-13B-v2
kyujinpy
"2023-11-01T14:13:02Z"
1,314
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/OpenOrca-ko-v2", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-30T19:09:11Z"
---
language:
- ko
datasets:
- kyujinpy/OpenOrca-ko-v2
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**

**The license is `cc-by-nc-sa-4.0`.**

# **🐳Korean-OpenOrca-13B-v2🐳**

![img](./Korean-OpenOrca.png)

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Model Architecture**
Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
Github Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**
I use [OpenOrca-ko-v2](https://huggingface.co/datasets/kyujinpy/OpenOrca-ko-v2), translated from [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) using DeepL.
I used an A100 40GB GPU on Colab for training.

# Model comparisons

| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| Korean-OpenOrca-13B🐳 | 48.79 | 43.09 | 54.13 | 40.24 | 45.22 | 61.28 |
| Korean-OpenOrca-13B-v2🐳 | 48.17 | 43.17 | 54.51 | 42.90 | 41.82 | 58.44 |

# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Korean-OpenOrca-13B-v2"
OpenOrca = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```

---
DopeorNope/COKA-DPO-test-v1
DopeorNope
"2023-11-09T18:30:54Z"
1,314
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-08T08:39:18Z"
Entry not found
kyujinpy/ko-platypus-kiwi-13B
kyujinpy
"2023-11-23T04:09:32Z"
1,314
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-Orca-Platypus-kiwi", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-14T12:10:14Z"
---
language:
- ko
datasets:
- kyujinpy/KOR-Orca-Platypus-kiwi
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**

**The license is `cc-by-nc-sa-4.0`.**

# **KOR-Orca-Platypus-kiwi🥝**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Model Architecture**
ko-platypus-kiwi-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**
I used [kyujinpy/KOR-Orca-Platypus-kiwi](https://huggingface.co/datasets/kyujinpy/KOR-Orca-Platypus-kiwi).

# Model comparisons

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **ko-platypus-kiwi-13B🥝** | 48.97 | 42.41 | 54.29 | 41.98 | 40.05 | **66.12** |

# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/ko-platypus-kiwi-13B"
OpenOrca = AutoModelForCausalLM.from_pretrained(
        repo,
        return_dict=True,
        torch_dtype=torch.float16,
        device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```

---
jiwoochris/llama2_cot-13b-v2
jiwoochris
"2023-11-15T06:00:20Z"
1,314
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-15T05:52:19Z"
Entry not found
Herry443/Mistral-7B-KNUT-v0.3
Herry443
"2023-12-09T04:54:10Z"
1,314
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-09T04:29:07Z"
Entry not found
shleeeee/mistral-ko-tech-science-v1
shleeeee
"2024-03-08T00:18:13Z"
1,314
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "ko", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-10T05:02:38Z"
---
license: other
language:
- ko
pipeline_tag: text-generation
---

# Model Card for mistral-ko-tech-science-v1

A fine-tuned version of the Mistral-7B model, trained on Korean data.

## Model Details

* **Model Developers**: shleeeee (Seunghyeon Lee), oopsung (Sungwoo Park)
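The card documents no usage example; a minimal sketch with the transformers pipeline API might look like the following (generation settings are illustrative, and no prompt template is documented for this model):

```python
# Minimal sketch with the pipeline API; generation settings are illustrative
# and no prompt template is documented for this model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="shleeeee/mistral-ko-tech-science-v1",
    device_map="auto",
)
result = generator("양자 컴퓨터의 기본 원리를 설명하면", max_new_tokens=128, do_sample=True)
print(result[0]["generated_text"])
```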
inswave/AISquare-Instruct-yi-ko-6b-v0.9.16
inswave
"2023-12-12T06:26:40Z"
1,314
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-12T02:00:35Z"
Entry not found
GAI-LLM/Yi-Ko-6B-dpo-v3
GAI-LLM
"2023-12-19T14:19:00Z"
1,314
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-19T13:59:21Z"
--- license: cc-by-4.0 ---
GAI-LLM/Yi-Ko-6B-dpo-v4
GAI-LLM
"2023-12-22T00:25:34Z"
1,314
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-22T00:14:03Z"
--- license: cc-by-nc-4.0 ---
ylacombe/musicgen-melody
ylacombe
"2024-02-06T12:41:49Z"
1,314
0
transformers
[ "transformers", "pytorch", "safetensors", "musicgen_melody", "text-to-audio", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
text-to-audio
"2024-01-25T17:00:05Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf
RichardErkhov
"2024-06-06T05:04:07Z"
1,314
1
null
[ "gguf", "region:us" ]
null
"2024-06-06T04:34:24Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) harry_potter_chatbot - GGUF - Model creator: https://huggingface.co/diabolic6045/ - Original model: https://huggingface.co/diabolic6045/harry_potter_chatbot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [harry_potter_chatbot.Q2_K.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q2_K.gguf) | Q2_K | 0.17GB | | [harry_potter_chatbot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.IQ3_XS.gguf) | IQ3_XS | 0.18GB | | [harry_potter_chatbot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.IQ3_S.gguf) | IQ3_S | 0.19GB | | [harry_potter_chatbot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q3_K_S.gguf) | Q3_K_S | 0.19GB | | [harry_potter_chatbot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.IQ3_M.gguf) | IQ3_M | 0.2GB | | [harry_potter_chatbot.Q3_K.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q3_K.gguf) | Q3_K | 0.21GB | | [harry_potter_chatbot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q3_K_M.gguf) | Q3_K_M | 0.21GB | | [harry_potter_chatbot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q3_K_L.gguf) | Q3_K_L | 0.23GB | | [harry_potter_chatbot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.IQ4_XS.gguf) | IQ4_XS | 0.22GB | | [harry_potter_chatbot.Q4_0.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q4_0.gguf) | Q4_0 | 0.23GB | | [harry_potter_chatbot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.IQ4_NL.gguf) | IQ4_NL | 0.23GB | | [harry_potter_chatbot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q4_K_S.gguf) | Q4_K_S | 0.23GB | | [harry_potter_chatbot.Q4_K.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q4_K.gguf) | Q4_K | 0.25GB | | [harry_potter_chatbot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [harry_potter_chatbot.Q4_1.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q4_1.gguf) | Q4_1 | 0.25GB | | [harry_potter_chatbot.Q5_0.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q5_0.gguf) | Q5_0 | 0.27GB | | [harry_potter_chatbot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q5_K_S.gguf) | Q5_K_S | 0.27GB | | 
[harry_potter_chatbot.Q5_K.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q5_K.gguf) | Q5_K | 0.29GB | | [harry_potter_chatbot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q5_K_M.gguf) | Q5_K_M | 0.29GB | | [harry_potter_chatbot.Q5_1.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q5_1.gguf) | Q5_1 | 0.29GB | | [harry_potter_chatbot.Q6_K.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q6_K.gguf) | Q6_K | 0.32GB | | [harry_potter_chatbot.Q8_0.gguf](https://huggingface.co/RichardErkhov/diabolic6045_-_harry_potter_chatbot-gguf/blob/main/harry_potter_chatbot.Q8_0.gguf) | Q8_0 | 0.41GB | Original model description: # Harry Potter Chatbot This model is a chatbot designed to generate responses in the style of Harry Potter, the protagonist from J.K. Rowling's popular book series and its movie adaptations. ## Model Architecture The `harry_potter_chatbot` is based on the [`DialoGPT-medium`](https://huggingface.co/microsoft/DialoGPT-medium) model, a powerful GPT-based architecture designed for generating conversational responses. It has been fine-tuned on a dataset of Harry Potter's dialogues from movie transcripts. ## Usage You can use this model to generate responses for a given input text using the following code: ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("diabolic6045/harry_potter_chatbot") model = AutoModelForCausalLM.from_pretrained("diabolic6045/harry_potter_chatbot") input_text = "What's your favorite spell?" input_tokens = tokenizer.encode(input_text, return_tensors='pt') output_tokens = model.generate(input_tokens, max_length=50, num_return_sequences=1) output_text = tokenizer.decode(output_tokens[0], skip_special_tokens=True) print(output_text) ``` ## Limitations This model is specifically designed to generate responses in the style of Harry Potter and may not provide accurate or coherent answers to general knowledge questions. It may also sometimes generate inappropriate responses. Be cautious while using this model in a public setting or for critical applications. ## Training Data The model was fine-tuned on a dataset of Harry Potter's dialogues from movie transcripts. The dataset was collected from publicly available movie scripts and includes conversations and quotes from various Harry Potter films. ## Acknowledgments This model was trained using the Hugging Face [Transformers](https://github.com/huggingface/transformers) library, and it is based on the [`DialoGPT-medium`](https://huggingface.co/microsoft/DialoGPT-medium) model by Microsoft. Special thanks to the Hugging Face team and Microsoft for their contributions to the NLP community. --- Feel free to test the model and provide feedback or report any issues. Enjoy chatting with Harry Potter!
Xrunner/comvis
Xrunner
"2024-06-21T00:36:34Z"
1,314
0
diffusers
[ "diffusers", "region:us" ]
null
"2024-06-21T00:27:21Z"
Entry not found
etri-xainlp/llama2-ko-13b-instruct-v1
etri-xainlp
"2023-10-30T03:35:20Z"
1,313
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-26T00:57:38Z"
---
license: apache-2.0
---

# llama2-ko-13b-instruct-v1

This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an instruction-following dataset (670k).
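No usage snippet is provided; below is a hedged sketch of loading the 13B model in 4-bit via bitsandbytes so it fits on a single GPU — the quantization settings and the Korean prompt are illustrative assumptions, and no specific prompt template is documented for this model:

```python
# Hedged sketch: 4-bit loading via bitsandbytes so the 13B model fits on a
# single GPU; quantization settings and the Korean prompt are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "etri-xainlp/llama2-ko-13b-instruct-v1"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, quantization_config=bnb, device_map="auto"
)

inputs = tokenizer("대한민국의 수도에 대해 알려줘.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```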
nakhyeonn/llama-2-ko-qlora-prompt_1024_new_2
nakhyeonn
"2023-10-28T12:20:59Z"
1,313
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-28T11:52:15Z"
Entry not found
HumanF-MarkrAI/pub-llama-13B-v4
HumanF-MarkrAI
"2023-11-02T22:47:51Z"
1,313
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-28T13:56:12Z"
Entry not found
maywell/Synatra-Zephyr-7B-v0.01
maywell
"2023-11-01T00:32:06Z"
1,313
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-01T00:16:50Z"
---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# **This is a VERY Early Development Model!**

This model is a very early version of Synatra-Zephyr-7B.

# **Synatra-Zephyr-7B-v0.01🐧**

![Synatra-Zephyr-7B-v0.01](./Synatra.png)

## Support Me

Synatra is a personal project, developed with one person's resources. If you like the model, how about supporting the research with a small donation?

[<img src="https://cdn.buymeacoffee.com/buttons/default-orange.png" alt="Buy me a Coffee" width="217" height="50">](https://www.buymeacoffee.com/mwell)

Wanna be a sponsor? Contact me on Telegram **AlzarTakkarsen**

# **License**

This model is strictly [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only. The "Model" is completely free (i.e. base model, derivatives, merges/mixes) to use for non-commercial purposes as long as the included **cc-by-nc-4.0** license in any parent repository and the non-commercial-use statute remain in place, regardless of other models' licenses. The license may change when a new model is released. If you want to use this model for commercial purposes, contact me.

# **Model Details**

**Base Model**
[HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)

**Trained On**
A100 80G * 4

# **Model Benchmark**

## Ko-LLM-Leaderboard

Benchmarking in progress...

# **Implementation Code**

Since the chat_template already contains the instruction format, you can use the code below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-Zephyr-7B-v0.01")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Zephyr-7B-v0.01")

messages = [
    {"role": "user", "content": "바나나는 원래 하얀색이야?"},
]

encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
cepiloth/ko-llama2-13b-finetune
cepiloth
"2023-11-01T08:54:19Z"
1,313
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-01T08:15:29Z"
--- tags: - autotrain - text-generation widget: - text: "I love AutoTrain because " --- # Model Trained Using AutoTrain
metterian/polyglot-ko-kullm-v2-fix
metterian
"2023-11-03T06:15:53Z"
1,313
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-03T05:28:19Z"
--- license: mit ---
maywell/ko_ocgn_ep1
maywell
"2023-11-12T23:35:28Z"
1,313
0
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-12T23:15:54Z"
--- license: cc-by-nc-4.0 ---
oopsung/Yi-Ko-6B-orcapus-test-v1
oopsung
"2023-12-06T05:27:12Z"
1,313
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-06T05:16:05Z"
Entry not found
DopeorNope/Yi_lee-v1-DPO-6B
DopeorNope
"2023-12-06T09:28:07Z"
1,313
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-06T09:13:58Z"
Entry not found
Minirecord/Merge_test01
Minirecord
"2023-12-08T06:15:10Z"
1,313
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-08T04:47:08Z"
--- license: apache-2.0 ---
LDCC/LDCC-Instruct-Llama-2-ko-13B-v1.7
LDCC
"2023-12-11T07:40:25Z"
1,313
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-11T07:35:33Z"
--- license: cc-by-nc-4.0 ---
DopeorNope/SOLAR_C-v1-10.7B
DopeorNope
"2023-12-28T05:30:35Z"
1,313
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-26T06:20:42Z"
Entry not found
genne/eclectus_7b_1.1
genne
"2023-12-26T23:42:23Z"
1,313
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-26T23:34:17Z"
Entry not found
ChrisWilson011016/5FCKxT34tYJ92bqQSg7EVSGR4UpvWEeHACYULFrTRkrumgGr_vgg
ChrisWilson011016
"2024-02-29T14:10:07Z"
1,313
0
keras
[ "keras", "region:us" ]
null
"2024-02-24T15:07:40Z"
Entry not found
louaaron/sedd-small
louaaron
"2024-03-07T07:23:49Z"
1,313
2
transformers
[ "transformers", "pytorch", "arxiv:2310.16834", "endpoints_compatible", "region:us" ]
null
"2024-02-28T01:31:54Z"
Score Entropy Discrete Diffusion (SEDD) small model for use with inference code in https://github.com/louaaron/Score-Entropy-Discrete-Diffusion. Paper found at arxiv.org/abs/2310.16834
spacy/en_core_web_md
spacy
"2023-11-21T08:10:29Z"
1,312
6
spacy
[ "spacy", "token-classification", "en", "license:mit", "model-index", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - spacy - token-classification language: - en license: mit model-index: - name: en_core_web_md results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8494302632 - name: NER Recall type: recall value: 0.8549178686 - name: NER F Score type: f_score value: 0.8521652315 - task: name: TAG type: token-classification metrics: - name: TAG (XPOS) Accuracy type: accuracy value: 0.9732581964 - task: name: UNLABELED_DEPENDENCIES type: token-classification metrics: - name: Unlabeled Attachment Score (UAS) type: f_score value: 0.9205112068 - task: name: LABELED_DEPENDENCIES type: token-classification metrics: - name: Labeled Attachment Score (LAS) type: f_score value: 0.9022890411 - task: name: SENTS type: token-classification metrics: - name: Sentences F-Score type: f_score value: 0.9076778775 --- ### Details: https://spacy.io/models/en#en_core_web_md English pipeline optimized for CPU. Components: tok2vec, tagger, parser, senter, ner, attribute_ruler, lemmatizer. | Feature | Description | | --- | --- | | **Name** | `en_core_web_md` | | **Version** | `3.7.1` | | **spaCy** | `>=3.7.2,<3.8.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 514157 keys, 20000 unique vectors (300 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br />[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br />[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `MIT` | | **Author** | [Explosion](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (113 labels for 3 components)</summary> | Component | Labels | | --- | --- | | **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, `_SP`, ```` | | **`parser`** | `ROOT`, `acl`, `acomp`, `advcl`, `advmod`, `agent`, `amod`, `appos`, `attr`, `aux`, `auxpass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `csubj`, `csubjpass`, `dative`, `dep`, `det`, `dobj`, `expl`, `intj`, `mark`, `meta`, `neg`, `nmod`, `npadvmod`, `nsubj`, `nsubjpass`, `nummod`, `oprd`, `parataxis`, `pcomp`, `pobj`, `poss`, `preconj`, `predet`, `prep`, `prt`, `punct`, `quantmod`, `relcl`, `xcomp` | | **`ner`** | `CARDINAL`, `DATE`, `EVENT`, `FAC`, `GPE`, `LANGUAGE`, `LAW`, `LOC`, `MONEY`, `NORP`, `ORDINAL`, `ORG`, `PERCENT`, `PERSON`, `PRODUCT`, `QUANTITY`, `TIME`, `WORK_OF_ART` | </details> ### Accuracy | Type | Score | | --- | --- | | `TOKEN_ACC` | 99.86 | | `TOKEN_P` | 99.57 | | `TOKEN_R` | 99.58 | | `TOKEN_F` | 99.57 | | `TAG_ACC` | 97.33 | | `SENTS_P` | 92.21 | | `SENTS_R` | 89.37 | | `SENTS_F` | 90.77 | | `DEP_UAS` | 92.05 | | `DEP_LAS` | 90.23 | | `ENTS_P` | 84.94 
| `ENTS_R` | 85.49 |
| `ENTS_F` | 85.22 |
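The card lists components and metrics but no usage snippet; standard spaCy usage looks like this (install the package first, e.g. with `python -m spacy download en_core_web_md`):

```python
import spacy

# Load the pipeline (assumes the package is installed, e.g. via
# `python -m spacy download en_core_web_md`).
nlp = spacy.load("en_core_web_md")
doc = nlp("Apple is looking at buying a U.K. startup for $1 billion.")

for ent in doc.ents:            # ner component
    print(ent.text, ent.label_)

for token in doc[:5]:           # tagger + parser components
    print(token.text, token.tag_, token.dep_)

print(doc[0].vector.shape)      # (300,) static vectors, per the table above
```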
momo/polyglot-ko-12.8b-Chat-QLoRA-Merge
momo
"2023-10-03T09:26:08Z"
1,312
2
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-02T08:22:58Z"
--- license: apache-2.0 language: - ko --- ## Model Details **Model Developers** Yunho Mo (momo) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** polyglot-ko-12.8b-Chat-QLoRA-Merge is an auto-regressive language model based on the polyglot-ko-12.8b transformer architecture. **Base Model** [polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) **Training Dataset** I use [KOpen-platypus](https://huggingface.co/datasets/kyujinpy/KOpen-platypus), [ko-lima](https://huggingface.co/datasets/taeshahn/ko-lima), [EverythingLM-data-V2-Ko](https://huggingface.co/datasets/ziozzang/EverythingLM-data-V2-Ko).
momo/polyglot-ko-12.8b-Orca-Chat-QLoRA-Merge-v2
momo
"2023-10-08T14:09:05Z"
1,312
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-06T02:50:52Z"
Entry not found
mncai/Mistral-7B-v0.1-combine-1k
mncai
"2023-10-22T06:02:24Z"
1,312
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "MindsAndCompany", "en", "ko", "dataset:DopeorNope/combined", "arxiv:2306.02707", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-22T05:30:45Z"
---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- DopeorNope/combined
---

## Model Details

* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)

## Dataset Details

### Used Datasets
- DopeorNope/combined

### Prompt Template
- Llama Prompt Template

## Limitations & Biases:

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

## License Disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.

## Contact Us

- [Minds And Company](https://mnc.ai/)

## Citation:

Please kindly cite using the following BibTeX:

```bibtex
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```
@misc{Orca-best,
  title = {Orca-best: A filtered version of orca gpt4 dataset.},
  author = {Shahul Es},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}},
}
```

```
@software{touvron2023llama2,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
  year={2023}
}
```

> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1)
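The card names the Llama prompt template but shows no example; a hedged sketch of formatting and generating with it follows (the bare `[INST] ... [/INST]` wrapper without a system block is an assumption, since the exact template used in training is not documented):

```python
# Hedged usage sketch; the [INST] wrapper is assumed from "Llama Prompt Template".
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mncai/Mistral-7B-v0.1-combine-1k"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "[INST] What are the three primary colors? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```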
oopsung/llama2-7b-ko-Orcapus-test-v1
oopsung
"2023-11-30T13:42:13Z"
1,312
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-30T13:33:34Z"
Entry not found
DopeorNope/Yi_lee-v1-6B
DopeorNope
"2023-12-05T06:46:39Z"
1,312
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-04T12:05:14Z"
Entry not found
swap-uniba/LLaMAntino-2-70b-hf-UltraChat-ITA
swap-uniba
"2024-05-10T11:45:18Z"
1,312
11
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "text-generation-inference", "it", "arxiv:2312.09993", "license:llama2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-02-02T16:08:18Z"
---
license: llama2
language:
- it
tags:
- text-generation-inference
---

<img src="https://i.ibb.co/6mHSRm3/llamantino53.jpg" alt="llamantino53" border="0" width="200px">

# LLaMAntino-2-70b-hf-UltraChat-ITA 🇮🇹 🌟

*Last Update: 02/02/2024*<br>

<hr>

## Model description

<!-- Provide a quick summary of what the model is/does. -->

**LLaMAntino-2-70b-hf-UltraChat-ITA** is a *Large Language Model (LLM)* that is an instruction-tuned version of **LLaMAntino-2-70b** (an Italian-adapted **LLaMA 2 - 70B**). This model aims to provide Italian NLP researchers with an improved model for Italian dialogue use cases.

The model was trained using *QLoRA*, with [UltraChat](https://github.com/thunlp/ultrachat) translated into Italian using [Argos Translate](https://pypi.org/project/argostranslate/1.4.0/) as training data. If you are interested in more details regarding the training procedure, you can find the code we used at the following link:

- **Repository:** https://github.com/swapUniba/LLaMAntino

**NOTICE**: the code has not been released yet; we apologize for the delay, it will be available as soon as possible!

- **Developed by:** Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** [Leonardo](https://www.hpc.cineca.it/systems/hardware/leonardo/) supercomputer
- **Model type:** LLaMA-2
- **Language(s) (NLP):** Italian
- **License:** Llama 2 Community License
- **Finetuned from model:** [meta-llama/Llama-2-70b-hf](https://huggingface.co/meta-llama/Llama-2-70b-hf)

## Prompt Format

The following prompt format, based on the [LLaMA 2 prompt template](https://gpus.llm-utils.org/llama-2-prompt-template/) and adapted to Italian, was used:

```python
" [INST] <<SYS>>\n" \
"Sei un assistente disponibile, rispettoso e onesto di nome Llamantino. " \
"Rispondi sempre nel modo più utile possibile, pur essendo sicuro. " \
"Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
"Assicurati che le tue risposte siano socialmente imparziali e positive. " \
"Se una domanda non ha senso o non è coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
"Se non conosci la risposta a una domanda, non condividere informazioni false.\n" \
"<</SYS>>\n\n" \
f"{user_msg_1} [/INST] {model_answer_1} </s> <s> [INST] {user_msg_2} [/INST] {model_answer_2} </s> ... <s> [INST] {user_msg_N} [/INST] {model_answer_N} </s>"
```

We recommend using the same prompt in inference to obtain the best results!

## How to Get Started with the Model

Below you can find an example of model usage:

```python
import os

import torch
import transformers
from transformers import AutoTokenizer, AutoModelForCausalLM

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"

model_id = "swap-uniba/LLaMAntino-2-70b-hf-UltraChat-ITA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.add_special_tokens({"pad_token": "<unk>"})
tokenizer.chat_template = "{% set ns = namespace(i=0) %}" \
    "{% for message in messages %}" \
    "{% if message['role'] == 'user' and ns.i == 0 %}" \
    "{{ bos_token + ' [INST] <<SYS>>\n' }}" \
    "{{ 'Sei un assistente disponibile, rispettoso e onesto di nome Llamantino. ' }}" \
    "{{ 'Rispondi sempre nel modo più utile possibile, pur essendo sicuro. ' }}" \
    "{{ 'Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. ' }}" \
    "{{ 'Assicurati che le tue risposte siano socialmente imparziali e positive. ' }}" \
    "{{ 'Se una domanda non ha senso o non è coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. ' }}" \
    "{{ 'Se non conosci la risposta a una domanda, non condividere informazioni false.\n' }}" \
    "{{ '<</SYS>>\n\n' }}" \
    "{{ message['content'] + ' [/INST]' }}" \
    "{% elif message['role'] == 'user' and ns.i != 0 %} " \
    "{{ bos_token + ' [INST] ' + message['content'] + ' [/INST]' }}" \
    "{% elif message['role'] == 'assistant' %}" \
    "{{ ' ' + message['content'] + ' ' + eos_token + ' ' }}" \
    "{% endif %}" \
    "{% set ns.i = ns.i + 1 %}" \
    "{% endfor %}"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map='balanced',
    use_flash_attention_2=True
)

pipe = transformers.pipeline(
    model=model,
    tokenizer=tokenizer,
    task='text-generation',
    return_full_text=False,  # only return the newly generated text, not the prompt
    max_new_tokens=512,  # max number of tokens to generate in the output
    temperature=0.7  # sampling temperature
)

messages = [{"role": "user", "content": "Cosa sono i word embeddings?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False)
sequences = pipe(text)
for seq in sequences:
    print(f"{seq['generated_text']}")
```

If you are facing issues when loading the model, you can try to load it **quantized**:

```python
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
```

*Note*:

1) The model loading strategy above requires the [*bitsandbytes*](https://pypi.org/project/bitsandbytes/) and [*accelerate*](https://pypi.org/project/accelerate/) libraries
2) The tokenizer, by default, adds the '\<BOS\>' token at the beginning of the prompt. If that is not the case, add the *\<s\>* string as a starting token.

## Evaluation

For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard).

Here's a breakdown of the performance metrics:

| Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average |
|:----------------------------|:----------------------|:----------------|:---------------------|:--------|
| **Accuracy Normalized** | 0.6566 | 0.5004 | 0.6084 | 0.588 |

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

If you use this model in your research, please cite the following:

```bibtex
@misc{basile2023llamantino,
      title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language},
      author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
      year={2023},
      eprint={2312.09993},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

*Notice:* Llama 2 is licensed under the LLAMA 2 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved. [*License*](https://ai.meta.com/llama/license/)
openlynn/Llama-3-Soliloquy-8B-v2
openlynn
"2024-05-03T15:23:06Z"
1,312
56
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "en", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-26T00:02:51Z"
---
license: cc-by-nc-sa-4.0
language:
- en
---

# LYNN - AI for Roleplay

<img src="./reallynn.png" alt="it's lynn!" width="340"/>

> [!TIP]
> No issues found... yet.

# Soliloquy-L3

Soliloquy-L3 is a highly capable roleplaying model designed for immersive, dynamic experiences. Trained on over 250 million tokens of roleplaying data, Soliloquy-L3 has a vast knowledge base, rich literary expression, and support for up to 24k context length. It outperforms existing ~13B models, delivering enhanced roleplaying capabilities.

## What's Changed

- 100% Retrieval
- Better Instruction Following

## Model Info

| Context Length | Parameter | Prompt Template | isErp |
| --- | --- | --- | --- |
| 24k (24576) | 8B | Llama 3 Chat | Partly |

## Prompt Template

You can use the following jinja2 template, which is identical to the chat_template in [tokenizer_config](./tokenizer_config.json) (a usage sketch is given at the end of this card):

```
{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}{% if add_generation_prompt %}{{ '<|start_header_id|>assistant<|end_header_id|>\n\n' }}{% endif %}
```

## License

This model is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, subject to the [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/).

If you would like to use this model for commercial purposes, please use our proprietary API. (Currently available at OpenRouter)

For non-commercial use, please adhere to the terms of the CC BY-NC-SA 4.0 license. You are free to share and adapt the model for non-commercial purposes, provided you give appropriate credit, indicate if changes were made, and do not imply endorsement by the licensor.

For more information about the CC BY-NC-SA 4.0 license, please visit: https://creativecommons.org/licenses/by-nc-sa/4.0/

If you have any questions or would like to inquire about licensing, please contact us.

## Llama 3 Intended Use

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.

**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.

[https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)

## Join our Discord

[**Join LYNN Discord**](https://discord.gg/xuZVqUyG4Y)
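Since the template above is the tokenizer's built-in chat_template, prompts can be rendered directly with transformers. A minimal sketch (the example messages are illustrative, not from the card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openlynn/Llama-3-Soliloquy-8B-v2")

messages = [
    {"role": "system", "content": "You are the narrator of an interactive story."},  # illustrative
    {"role": "user", "content": "Describe the tavern the party just entered."},
]

# Renders the Llama 3 chat format shown above and appends the assistant
# header so the model continues in the assistant role.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```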
aismlv/wav2vec2-large-xlsr-kazakh
aismlv
"2023-12-20T23:33:31Z"
1,311
8
transformers
[ "transformers", "pytorch", "jax", "safetensors", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "kk", "dataset:kazakh_speech_corpus", "base_model:facebook/wav2vec2-large-xlsr-53", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2022-03-02T23:29:05Z"
---
language: kk
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
datasets:
- kazakh_speech_corpus
metrics:
- wer
base_model: facebook/wav2vec2-large-xlsr-53
model-index:
- name: Wav2Vec2-XLSR-53 Kazakh by adilism
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      name: Kazakh Speech Corpus v1.1
      type: kazakh_speech_corpus
      args: kk
    metrics:
    - type: wer
      value: 19.65
      name: Test WER
---

# Wav2Vec2-Large-XLSR-53-Kazakh

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) for Kazakh ASR using the [Kazakh Speech Corpus v1.1](https://issai.nu.edu.kz/kz-speech-corpus/?version=1.1).

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

from utils import get_test_dataset

test_dataset = get_test_dataset("ISSAI_KSC_335RS_v1.1")

processor = Wav2Vec2Processor.from_pretrained("aismlv/wav2vec2-large-xlsr-kazakh")
model = Wav2Vec2ForCTC.from_pretrained("aismlv/wav2vec2-large-xlsr-kazakh")

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the test set of [Kazakh Speech Corpus v1.1](https://issai.nu.edu.kz/kz-speech-corpus/?version=1.1). To evaluate, download the [archive](https://www.openslr.org/resources/102/ISSAI_KSC_335RS_v1.1_flac.tar.gz), untar it and pass the path to the data to `get_test_dataset` as below:

```python
import re

import torch
import torchaudio
from datasets import load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

from utils import get_test_dataset

test_dataset = get_test_dataset("ISSAI_KSC_335RS_v1.1")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("aismlv/wav2vec2-large-xlsr-kazakh")
model = Wav2Vec2ForCTC.from_pretrained("aismlv/wav2vec2-large-xlsr-kazakh")
model.to("cuda")

# Punctuation to strip from transcripts before scoring
chars_to_ignore_regex = r'[,?.!\-;:"“]'

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 19.65%

## Training

The Kazakh Speech Corpus v1.1 `train` dataset was used for training.
cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token
cambridgeltl
"2023-06-14T19:02:41Z"
1,311
0
transformers
[ "transformers", "pytorch", "jax", "safetensors", "bert", "feature-extraction", "arxiv:2010.11784", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
feature-extraction
"2022-03-02T23:29:05Z"
---
language: en
tags:
- biomedical
- lexical-semantics
datasets:
- UMLS
---

**[news]** A cross-lingual extension of SapBERT will appear in the main conference of **ACL 2021**! <br>
**[news]** SapBERT will appear in the conference proceedings of **NAACL 2021**!

### SapBERT-PubMedBERT

SapBERT by [Liu et al. (2020)](https://arxiv.org/pdf/2010.11784.pdf). Trained with [UMLS](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) 2020AA (English only), using [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) as the base model. Please use the mean-pooling of the output as the representation.

#### Extracting embeddings from SapBERT

The following script converts a list of strings (entity names) into embeddings.

```python
import numpy as np
import torch
from tqdm.auto import tqdm
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token")
model = AutoModel.from_pretrained("cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token").cuda()

# replace with your own list of entity names
all_names = ["covid-19", "Coronavirus infection", "high fever", "Tumor of posterior wall of oropharynx"]

bs = 128  # batch size during inference
all_embs = []
for i in tqdm(np.arange(0, len(all_names), bs)):
    toks = tokenizer.batch_encode_plus(all_names[i:i + bs],
                                       padding="max_length",
                                       max_length=25,
                                       truncation=True,
                                       return_tensors="pt")
    toks_cuda = {}
    for k, v in toks.items():
        toks_cuda[k] = v.cuda()
    mean_rep = model(**toks_cuda)[0].mean(1)  # mean-pool the last hidden states to get the embedding
    all_embs.append(mean_rep.cpu().detach().numpy())

all_embs = np.concatenate(all_embs, axis=0)
```

For more details about training and eval, see the SapBERT [github repo](https://github.com/cambridgeltl/sapbert).
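With the embeddings in hand, medical entity linking reduces to nearest-neighbour search in the embedding space. A hedged sketch continuing from `all_embs` above (the query string and the use of scikit-learn are illustrative assumptions, not part of the original script):

```python
# Link a query mention to the closest entity name by cosine similarity.
from sklearn.metrics.pairwise import cosine_similarity

query = "heart attack"  # illustrative mention
toks = tokenizer(query, padding="max_length", max_length=25,
                 truncation=True, return_tensors="pt")
toks_cuda = {k: v.cuda() for k, v in toks.items()}
query_emb = model(**toks_cuda)[0].mean(1).cpu().detach().numpy()

scores = cosine_similarity(query_emb, all_embs)[0]
best = int(scores.argmax())
print(all_names[best], scores[best])
```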
### Citation

```bibtex
@inproceedings{liu-etal-2021-self,
    title = "Self-Alignment Pretraining for Biomedical Entity Representations",
    author = "Liu, Fangyu and Shareghi, Ehsan and Meng, Zaiqiao and Basaldella, Marco and Collier, Nigel",
    booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jun,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2021.naacl-main.334",
    pages = "4228--4238",
    abstract = "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge. This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal. To address this challenge, we propose SapBERT, a pretraining scheme that self-aligns the representation space of biomedical entities. We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts. In contrast with previous pipeline-based hybrid systems, SapBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets. In the scientific domain, we achieve SOTA even without task-specific supervision. With substantial improvement over various domain-specific pretrained MLMs such as BioBERT, SciBERT, and PubMedBERT, our pretraining scheme proves to be both effective and robust.",
}
```
izumi-lab/electra-base-japanese-generator
izumi-lab
"2023-10-21T13:21:16Z"
1,311
0
transformers
[ "transformers", "pytorch", "safetensors", "electra", "fill-mask", "ja", "dataset:wikipedia", "arxiv:2003.10555", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2022-03-02T23:29:05Z"
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
widget:
- text: 東京大学で[MASK]の研究をしています。
---

# ELECTRA base Japanese generator

This is an [ELECTRA](https://github.com/google-research/electra) model pretrained on texts in the Japanese language. The code for the pretraining is available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining/tree/v1.0). A minimal usage sketch is given at the end of this card.

## Model architecture

The model architecture is the same as that of the ELECTRA base generator in the [original ELECTRA implementation](https://github.com/google-research/electra): 12 layers, 256 dimensions of hidden states, and 4 attention heads.

## Training Data

The model is trained on the Japanese version of Wikipedia, using the Wikipedia dump file as of June 1, 2021. The corpus file is 2.9GB, consisting of approximately 20M sentences.

## Tokenization

The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm. The vocabulary size is 32768.

## Training

The model is trained with the same configuration as ELECTRA base in the [original ELECTRA paper](https://arxiv.org/abs/2003.10555) except for size: 512 tokens per instance, 256 instances per batch, and 766k training steps. The size of the generator is 1/3 of the size of the discriminator.

## Citation

```bibtex
@article{Suzuki-etal-2023-ipm,
  title = {Constructing and analyzing domain-specific language model for financial text mining},
  author = {Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
  journal = {Information Processing & Management},
  volume = {60},
  number = {2},
  pages = {103194},
  year = {2023},
  doi = {10.1016/j.ipm.2022.103194}
}
```

## Licenses

The pretrained models are distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/).

## Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP21K12010.
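Since the generator carries a masked-language-model head, it can be queried directly with the transformers fill-mask pipeline. A minimal sketch using the widget example from the card ("I am doing research on [MASK] at the University of Tokyo."):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="izumi-lab/electra-base-japanese-generator")

# Widget example from the card
for pred in fill_mask("東京大学で[MASK]の研究をしています。"):
    print(pred["token_str"], round(pred["score"], 3))
```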
bigscience/mt0-xl
bigscience
"2024-03-17T15:07:06Z"
1,311
28
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:bigscience/xP3", "dataset:mc4", "arxiv:2211.01786", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-10-27T20:55:06Z"
--- datasets: - bigscience/xP3 - mc4 license: apache-2.0 language: - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - 'no' - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu pipeline_tag: text2text-generation widget: - text: >- 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative? example_title: zh-en sentiment - text: 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? example_title: zh-zh sentiment - text: Suggest at least five related search terms to "Mạng neural nhân tạo". example_title: vi-en query - text: >- Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels». example_title: fr-fr query - text: Explain in a sentence in Telugu what is backpropagation in neural networks. example_title: te-en qa - text: Why is the sky blue? example_title: en-en qa - text: >- Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): example_title: es-en fable - text: >- Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is "Violence is the last refuge of the incompetent". Fable (in Hindi): example_title: hi-en fable model-index: - name: mt0-xl results: - task: type: Coreference resolution dataset: type: winogrande name: Winogrande XL (xl) config: xl split: validation revision: a80f460359d1e9a67c006011c94de42a8759430c metrics: - type: Accuracy value: 52.49 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (en) config: en split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 61.89 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (fr) config: fr split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 59.04 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (jp) config: jp split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 60.27 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 66.16 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (ru) config: ru split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 59.05 - task: type: Coreference resolution dataset: type: Muennighoff/xwinograd name: XWinograd (zh) config: zh split: test revision: 9dd5ea5505fad86b7bedad667955577815300cee metrics: - type: Accuracy value: 62.9 - task: type: Natural language inference dataset: type: anli name: ANLI (r1) config: r1 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 38.2 - task: type: Natural language inference dataset: type: anli name: 
ANLI (r2) config: r2 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 34.8 - task: type: Natural language inference dataset: type: anli name: ANLI (r3) config: r3 split: validation revision: 9dbd830a06fea8b1c49d6e5ef2004a08d9f45094 metrics: - type: Accuracy value: 39 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (cb) config: cb split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 85.71 - task: type: Natural language inference dataset: type: super_glue name: SuperGLUE (rte) config: rte split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 78.7 - task: type: Natural language inference dataset: type: xnli name: XNLI (ar) config: ar split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 51.85 - task: type: Natural language inference dataset: type: xnli name: XNLI (bg) config: bg split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.18 - task: type: Natural language inference dataset: type: xnli name: XNLI (de) config: de split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.78 - task: type: Natural language inference dataset: type: xnli name: XNLI (el) config: el split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.78 - task: type: Natural language inference dataset: type: xnli name: XNLI (en) config: en split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 56.83 - task: type: Natural language inference dataset: type: xnli name: XNLI (es) config: es split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.78 - task: type: Natural language inference dataset: type: xnli name: XNLI (fr) config: fr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 54.22 - task: type: Natural language inference dataset: type: xnli name: XNLI (hi) config: hi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.24 - task: type: Natural language inference dataset: type: xnli name: XNLI (ru) config: ru split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.09 - task: type: Natural language inference dataset: type: xnli name: XNLI (sw) config: sw split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 49.6 - task: type: Natural language inference dataset: type: xnli name: XNLI (th) config: th split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 52.13 - task: type: Natural language inference dataset: type: xnli name: XNLI (tr) config: tr split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.56 - task: type: Natural language inference dataset: type: xnli name: XNLI (ur) config: ur split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 47.91 - task: type: Natural language inference dataset: type: xnli name: XNLI (vi) config: vi split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 53.21 - task: type: Natural language inference dataset: type: xnli name: XNLI 
(zh) config: zh split: validation revision: a5a45e4ff92d5d3f34de70aaf4b72c3bdf9f7f16 metrics: - type: Accuracy value: 50.64 - task: type: Program synthesis dataset: type: openai_humaneval name: HumanEval config: None split: test revision: e8dc562f5de170c54b5481011dd9f4fa04845771 metrics: - type: Pass@1 value: 0 - type: Pass@10 value: 0 - type: Pass@100 value: 0 - task: type: Sentence completion dataset: type: story_cloze name: StoryCloze (2016) config: '2016' split: validation revision: e724c6f8cdf7c7a2fb229d862226e15b023ee4db metrics: - type: Accuracy value: 79.1 - task: type: Sentence completion dataset: type: super_glue name: SuperGLUE (copa) config: copa split: validation revision: 9e12063561e7e6c79099feb6d5a493142584e9e2 metrics: - type: Accuracy value: 72 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (et) config: et split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 70 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ht) config: ht split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 66 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (id) config: id split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 71 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (it) config: it split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 70 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (qu) config: qu split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 56 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (sw) config: sw split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 53 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (ta) config: ta split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 64 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (th) config: th split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 60 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (tr) config: tr split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 58 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (vi) config: vi split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 68 - task: type: Sentence completion dataset: type: xcopa name: XCOPA (zh) config: zh split: validation revision: 37f73c60fb123111fa5af5f9b705d0b3747fd187 metrics: - type: Accuracy value: 65 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ar) config: ar split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 70.09 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (es) config: es split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 77.17 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (eu) config: eu split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 69.03 - task: type: Sentence completion dataset: type: 
Muennighoff/xstory_cloze name: XStoryCloze (hi) config: hi split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 71.08 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (id) config: id split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 75.71 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (my) config: my split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 65.65 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (ru) config: ru split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 74.85 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (sw) config: sw split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 71.14 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (te) config: te split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 68.89 - task: type: Sentence completion dataset: type: Muennighoff/xstory_cloze name: XStoryCloze (zh) config: zh split: validation revision: 8bb76e594b68147f1a430e86829d07189622b90d metrics: - type: Accuracy value: 72.93 --- ![xmtf](https://github.com/bigscience-workshop/xmtf/blob/master/xmtf_banner.png?raw=true) # Table of Contents 1. [Model Summary](#model-summary) 2. [Use](#use) 3. [Limitations](#limitations) 4. [Training](#training) 5. [Evaluation](#evaluation) 7. [Citation](#citation) # Model Summary > We present BLOOMZ & mT0, a family of models capable of following human instructions in dozens of languages zero-shot. We finetune BLOOM & mT5 pretrained multilingual language models on our crosslingual task mixture (xP3) and find our resulting models capable of crosslingual generalization to unseen tasks & languages. - **Repository:** [bigscience-workshop/xmtf](https://github.com/bigscience-workshop/xmtf) - **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786) - **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) - **Languages:** Refer to [mc4](https://huggingface.co/datasets/mc4) for pretraining & [xP3](https://huggingface.co/bigscience/xP3) for finetuning language proportions. It understands both pretraining & finetuning languages. - **BLOOMZ & mT0 Model Family:** <div class="max-w-full overflow-auto"> <table> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3>xP3</a>. Recommended for prompting in English. 
</tr> <tr> <td>Parameters</td> <td>300M</td> <td>580M</td> <td>1.2B</td> <td>3.7B</td> <td>13B</td> <td>560M</td> <td>1.1B</td> <td>1.7B</td> <td>3B</td> <td>7.1B</td> <td>176B</td> </tr> <tr> <td>Finetuned Model</td> <td><a href=https://huggingface.co/bigscience/mt0-small>mt0-small</a></td> <td><a href=https://huggingface.co/bigscience/mt0-base>mt0-base</a></td> <td><a href=https://huggingface.co/bigscience/mt0-large>mt0-large</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xl>mt0-xl</a></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl>mt0-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-560m>bloomz-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b1>bloomz-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-1b7>bloomz-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-3b>bloomz-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1>bloomz-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloomz>bloomz</a></td> </tr> </tr> <tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/bigscience/xP3mt>xP3mt</a>. Recommended for prompting in non-English.</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-mt>bloomz-7b1-mt</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-mt>bloomz-mt</a></td> </tr> <th colspan="12">Multitask finetuned on <a style="font-weight:bold" href=https://huggingface.co/datasets/Muennighoff/P3>P3</a>. Released for research purposes only. Strictly inferior to above models!</th> </tr> <tr> <td>Finetuned Model</td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td> <td></td> <td></td> <td></td> <td></td> <td><a href=https://huggingface.co/bigscience/bloomz-7b1-p3>bloomz-7b1-p3</a></td> <td><a href=https://huggingface.co/bigscience/bloomz-p3>bloomz-p3</a></td> </tr> <th colspan="12">Original pretrained checkpoints. Not recommended.</th> <tr> <td>Pretrained Model</td> <td><a href=https://huggingface.co/google/mt5-small>mt5-small</a></td> <td><a href=https://huggingface.co/google/mt5-base>mt5-base</a></td> <td><a href=https://huggingface.co/google/mt5-large>mt5-large</a></td> <td><a href=https://huggingface.co/google/mt5-xl>mt5-xl</a></td> <td><a href=https://huggingface.co/google/mt5-xxl>mt5-xxl</a></td> <td><a href=https://huggingface.co/bigscience/bloom-560m>bloom-560m</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b1>bloom-1b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom-1b7>bloom-1b7</a></td> <td><a href=https://huggingface.co/bigscience/bloom-3b>bloom-3b</a></td> <td><a href=https://huggingface.co/bigscience/bloom-7b1>bloom-7b1</a></td> <td><a href=https://huggingface.co/bigscience/bloom>bloom</a></td> </tr> </table> </div> # Use ## Intended use We recommend using the model to perform tasks expressed in natural language. For example, given the prompt "*Translate to English: Je t’aime.*", the model will most likely answer "*I love you.*". Some prompt ideas from our paper: - 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? - Suggest at least five related search terms to "Mạng neural nhân tạo". - Write a fairy tale about a troll saving a princess from a dangerous dragon. 
The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): - Explain in a sentence in Telugu what is backpropagation in neural networks. **Feel free to share your generations in the Community tab!** ## How to use ### CPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers from transformers import AutoModelForSeq2SeqLM, AutoTokenizer checkpoint = "bigscience/mt0-xl" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate from transformers import AutoModelForSeq2SeqLM, AutoTokenizer checkpoint = "bigscience/mt0-xl" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, torch_dtype="auto", device_map="auto") inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> ### GPU in 8bit <details> <summary> Click to expand </summary> ```python # pip install -q transformers accelerate bitsandbytes from transformers import AutoModelForSeq2SeqLM, AutoTokenizer checkpoint = "bigscience/mt0-xl" tokenizer = AutoTokenizer.from_pretrained(checkpoint) model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True) inputs = tokenizer.encode("Translate to English: Je t’aime.", return_tensors="pt").to("cuda") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` </details> <!-- Necessary for whitespace --> ### # Limitations **Prompt Engineering:** The performance may vary depending on the prompt. For BLOOMZ models, we recommend making it very clear when the input stops to avoid the model trying to continue it. For example, the prompt "*Translate to English: Je t'aime*" without the full stop (.) at the end, may result in the model trying to continue the French sentence. Better prompts are e.g. "*Translate to English: Je t'aime.*", "*Translate to English: Je t'aime. Translation:*" "*What is "Je t'aime." in English?*", where it is clear for the model when it should answer. Further, we recommend providing the model as much context as possible. For example, if you want it to answer in Telugu, then tell the model, e.g. "*Explain in a sentence in Telugu what is backpropagation in neural networks.*". # Training ## Model - **Architecture:** Same as [mt5-xl](https://huggingface.co/google/mt5-xl), also refer to the `config.json` file - **Finetuning steps:** 10000 - **Finetuning tokens:** 1.85 billion - **Precision:** bfloat16 ## Hardware - **TPUs:** TPUv4-128 ## Software - **Orchestration:** [T5X](https://github.com/google-research/t5x) - **Neural networks:** [Jax](https://github.com/google/jax) # Evaluation We refer to Table 7 from our [paper](https://arxiv.org/abs/2211.01786) & [bigscience/evaluation-results](https://huggingface.co/datasets/bigscience/evaluation-results) for zero-shot results on unseen tasks. The sidebar reports zero-shot performance of the best prompt per dataset config. 
# Citation ```bibtex @article{muennighoff2022crosslingual, title={Crosslingual generalization through multitask finetuning}, author={Muennighoff, Niklas and Wang, Thomas and Sutawika, Lintang and Roberts, Adam and Biderman, Stella and Scao, Teven Le and Bari, M Saiful and Shen, Sheng and Yong, Zheng-Xin and Schoelkopf, Hailey and others}, journal={arXiv preprint arXiv:2211.01786}, year={2022} } ```
sue3489/test1_kullm-polyglot-5.8b-v2-koalpaca-v1.1b
sue3489
"2023-10-05T02:21:21Z"
1,311
0
transformers
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "ko", "dataset:beomi/KoAlpaca-v1.1a", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-31T08:18:53Z"
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test1_kullm-polyglot-5.8b-v2-koalpaca-v1.1b
  results: []
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
---

# test1_kullm-polyglot-5.8b-v2-koalpaca-v1.1b

This model is a fine-tuned version of [nlpai-lab/kullm-polyglot-5.8b-v2](https://huggingface.co/nlpai-lab/kullm-polyglot-5.8b-v2) on the beomi/KoAlpaca-v1.1a dataset.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
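The card documents training only; for completeness, a minimal, hedged inference sketch (the "### 질문 / ### 답변" prompt wording is an assumption based on common KoAlpaca formats, not taken from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sue3489/test1_kullm-polyglot-5.8b-v2-koalpaca-v1.1b"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# ASSUMPTION: instruction-style prompt; match your training format if it differs.
prompt = "### 질문: 한국의 수도는 어디인가요?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```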
krevas/LDCC-Instruct-Llama-2-ko-13B-v4.2.4
krevas
"2023-10-28T05:34:08Z"
1,311
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-28T05:13:13Z"
--- license: cc-by-nc-4.0 ---
nakhyeon/polyglot-ko-12b-qlora
nakhyeon
"2023-11-04T05:26:13Z"
1,311
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-04T04:56:45Z"
--- license: mit ---
Kaeri-Jenti/LDCC-with-korca
Kaeri-Jenti
"2023-11-06T00:50:24Z"
1,311
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-06T00:35:56Z"
--- license: llama2 ---
lIlBrother/llama2-merge-v0.2
lIlBrother
"2023-11-10T14:21:42Z"
1,311
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-10T13:26:47Z"
Entry not found
Minirecord/Mini_DPO_7b_01
Minirecord
"2023-12-01T00:23:46Z"
1,311
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-29T10:11:42Z"
--- license: cc-by-sa-4.0 ---
Ja-ck/Mistral-instruct-Y24-v6
Ja-ck
"2023-12-04T07:37:47Z"
1,311
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-04T07:30:19Z"
--- license: apache-2.0 language: - ko pipeline_tag: text-generation --- ## Prompt: ChatML
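The card names ChatML as the prompt format but does not spell it out. For reference, the ChatML layout looks as follows (the system message is an illustrative placeholder):

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{instruction}<|im_end|>
<|im_start|>assistant
{response}<|im_end|>
```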
oopsung/Yi-Ko-6B-Exo-test-v1
oopsung
"2023-12-06T08:59:57Z"
1,311
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-06T08:53:41Z"
Entry not found
Yntec/aMovieTrend
Yntec
"2023-09-17T07:35:10Z"
1,310
2
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Ciro_Negrogni", "MagicArt35", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-09-16T19:50:43Z"
---
license: creativeml-openrail-m
library_name: diffusers
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- text-to-image
- Ciro_Negrogni
- MagicArt35
---

# A Movie Trend

AmovieX by MagicArt35 with the Photographic Trend LoRA by Ciro_Negrogni baked in. This is the second of three versions and uses AmovieX's compositions.

First version: https://huggingface.co/Yntec/aPhotographicTrend

Third version, with Photographic Trend's compositions: https://huggingface.co/Yntec/Trending

Samples and prompts:

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/aEJ2EDQHClPYxsv-bVSvm.png)

Pretty Cute Girl Photorealistic, highly detailed, masterpiece, trending on ArtStation, sitting, Detailed Chibi Eyes, fantasy, beautiful detailed legs, streetwear, gorgeous detailed hair, hat, Magazine ad, iconic, 1943, from the movie, sharp focus.

![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/7NyOTvQizmMBsnrnTlnL1.png)

Cartoon CUTE LITTLE baby, CHIBI, gorgeous detailed hair, looking, cute socks, holding pillow, skirt, Magazine ad, iconic, 1940, sharp focus. pencil art By KlaysMoji and Clay Mann and leyendecker and Dave Rapoza.

Original pages:

https://civitai.com/models/98543 (Photographic Trend)

https://civitai.com/models/94687/photo-movie-x (AmovieX)

# Recipe

- Merge the Photographic Trend LoRA into the checkpoint at 1.0 strength

Model A: AmovieX

Output: PhotographicTrendAmovieX

- SuperMerger Weight Sum Train Difference, using MBW 1,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1

Model A: PhotographicTrendAmovieX

Model B: AmovieX

Output: aMovieTrend
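For completeness, a minimal, hedged generation sketch with diffusers (the step count, guidance scale, and prompt are illustrative settings, not recommendations from the author):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/aMovieTrend", torch_dtype=torch.float16
).to("cuda")

prompt = ("Pretty Cute Girl, photorealistic, highly detailed, masterpiece, "
          "Magazine ad, iconic, 1943, from the movie, sharp focus")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sample.png")
```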
mncai/Mistral-7B-v0.1-orca-1k
mncai
"2023-10-22T04:33:28Z"
1,310
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "MindsAndCompany", "en", "ko", "dataset:kyujinpy/OpenOrca-KO", "arxiv:2306.02707", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-22T04:19:01Z"
---
pipeline_tag: text-generation
license: mit
language:
- en
- ko
library_name: transformers
tags:
- MindsAndCompany
datasets:
- kyujinpy/OpenOrca-KO
---

## Model Details

* **Developed by**: [Minds And Company](https://mnc.ai/)
* **Backbone Model**: [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)

## Dataset Details

### Used Datasets
- kyujinpy/OpenOrca-KO

### Prompt Template
- Llama Prompt Template

## Limitations & Biases

Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

## License Disclaimer

This model is bound by the license & usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.

## Contact Us

- [Minds And Company](https://mnc.ai/)

## Citation

Please kindly cite using the following BibTeX:

```bibtex
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

```bibtex
@misc{Orca-best,
      title = {Orca-best: A filtered version of the Orca GPT-4 dataset},
      author = {Shahul Es},
      year = {2023},
      publisher = {HuggingFace},
      journal = {HuggingFace repository},
      howpublished = {\url{https://huggingface.co/datasets/shahules786/orca-best/}}
}
```

```bibtex
@software{touvron2023llama2,
      title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
      author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
      year={2023}
}
```

> Readme format: [Riiid/sheep-duck-llama-2-70b-v1.1](https://huggingface.co/Riiid/sheep-duck-llama-2-70b-v1.1)
kyujinpy/KOR-Orca-Platypus-13B-v2
kyujinpy
"2023-11-10T16:40:13Z"
1,310
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-10T06:56:20Z"
---
language:
- ko
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed by the LLM research consortium of Media Group Human & Forest Co., Ltd. (사람과숲) and Marker Inc. (마커)**

**The license is `cc-by-nc-sa-4.0`.**

# **🐳KOR-Orca-Platypus-13B🐳**

![img](./Korean-OpenOrca.png)

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
Korean-OpenOrca-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
Github Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**
I used [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3) (currently private).

I used an A100 40GB GPU on Colab for training.

# **Model Benchmark**

## KO-LLM leaderboard

- Follow up at the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KOR-Orca-Platypus-13B🐳 | 46.59 | 42.06 | 53.95 | 42.28 | 43.55 | 51.12 |
| KOR-Orca-Platypus-13B🐳-v2 | 49.48 | 44.03 | 54.43 | 42.23 | 41.64 | 65.05 |

> Compared with the top 4 SOTA models. (update: 10/09)

# Implementation Code

```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/KOR-Orca-Platypus-13B-v2"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```

---
oopsung/llama2-7b-ko-wiki-test-v1
oopsung
"2023-12-05T09:41:56Z"
1,310
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-05T09:35:23Z"
Entry not found
oopsung/Yi-Ko-6B-tech-test-v1
oopsung
"2023-12-06T18:15:31Z"
1,310
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-06T18:08:42Z"
Entry not found
We-Want-GPU/Yi-Ko-6B-orca-alpaca-gpt4-math
We-Want-GPU
"2023-12-15T15:37:52Z"
1,310
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-15T15:31:53Z"
Entry not found
yanolja/KoSOLAR-10.7B-v0.1-deprecated
yanolja
"2024-01-05T14:58:10Z"
1,310
20
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:upstage/SOLAR-10.7B-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-28T08:38:07Z"
---
license: apache-2.0
base_model: upstage/SOLAR-10.7B-v1.0
tags:
- generated_from_trainer
model-index:
- name: yanolja/KoSOLAR-10.7B-v0.1
  results: []
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Discord

If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. It's worth noting that Korean is the primary language used in this server. The landscape of LLMs is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: https://discord.gg/b27bAHg95m.

# Caution

This model is **DEPRECATED** due to an issue with the tokenizer. A new, corrected version will be uploaded shortly. We strongly advise against fine-tuning this model until the updated version is available. Details for the new version will be provided in a separate model card.

# yanolja/KoSOLAR-10.7B-v0.1

This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), specifically pre-trained on various Korean web-crawled datasets available on HuggingFace. Our approach was to expand the model's understanding of Korean by pre-training the embeddings for new tokens while preserving the original parameters of the base model.

## Model Description

Most parameters of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) were kept frozen during our training process. Only the embeddings for the newly added Korean tokens in the `embed_tokens` layer and the `lm_head` layer were pre-trained. This approach allowed us to enhance the model's performance in Korean while maintaining its original English capabilities.

## Intended Uses & Limitations

No instruction tuning has been performed on this model. We recommend further training for specific purposes with caution, as it was primarily enhanced for Korean language understanding.

## Training and Evaluation Data

The model was pre-trained on various Korean web-crawled datasets openly available on HuggingFace.

## Training Procedure

### Clarification on "Pre-trained"

It's essential to understand what "pre-trained" means in the context of this model. While the base model was already pre-trained on a broad, non-task-specific corpus of data, we further pre-trained only the embeddings for the expanded Korean vocabulary. This means that we did not alter the other existing parameters from the base model at all. This approach ensures a robust understanding of both English and Korean.
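The setup described above, updating only `embed_tokens` and `lm_head` while freezing everything else, can be expressed in a few lines. A hedged sketch (not the authors' actual code; data pipeline and trainer are omitted, and the added tokens are illustrative):

```python
# Hedged sketch of embedding-only pre-training after vocabulary extension.
# NOTE (assumption): the card says only the embeddings for the *new* tokens
# were trained; this coarser version unfreezes the whole embed_tokens and
# lm_head tensors. Per-row masking would need a gradient hook, omitted here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0")
tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")

# Extend the vocabulary with new Korean tokens (illustrative examples).
tokenizer.add_tokens(["안녕하세요", "감사합니다"])
model.resize_token_embeddings(len(tokenizer))

# Freeze every parameter except the input embeddings and the LM head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("model.embed_tokens", "lm_head"))

print([n for n, p in model.named_parameters() if p.requires_grad])
# ['model.embed_tokens.weight', 'lm_head.weight']
```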
### Training Hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1

### Training Results

#### upstage/SOLAR-10.7B-v1.0

| Groups | Version | Filter | n-shot | Metric | Value | | Stderr |
|-------------|---------|------------|--------|-------------|--------|-----|--------|
| kmmlu | N/A | none | 0 | acc | 0.3004 | ± | 0.0528 |
| gsm8k | Yaml | get-answer | 5 | exact_match | 0.5625 | ± | 0.0137 |
| hellaswag | Yaml | none | 0 | acc | 0.6393 | ± | 0.0048 |
| mmlu | N/A | none | 0 | acc | 0.6305 | ± | 0.1452 |
| truthfulqa | N/A | none | 0 | acc | 0.4096 | ± | 0.0467 |
| winogrande | Yaml | none | 0 | acc | 0.7443 | ± | 0.0123 |

#### yanolja/KoSOLAR-10.7B-v0.1

| Groups | Version | Filter | n-shot | Metric | Value | | Stderr |
|-------------|---------|------------|--------|-------------|--------|-----|--------|
| kmmlu | N/A | none | 0 | acc | 0.2948 | ± | 0.0537 |
| gsm8k | Yaml | get-answer | 5 | exact_match | 0.5527 | ± | 0.0137 |
| hellaswag | Yaml | none | 0 | acc | 0.6392 | ± | 0.0048 |
| mmlu | N/A | none | 0 | acc | 0.6303 | ± | 0.1411 |
| truthfulqa | N/A | none | 0 | acc | 0.3618 | ± | 0.0472 |
| winogrande | Yaml | none | 0 | acc | 0.7459 | ± | 0.0122 |

### Framework Versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
mssongit/Koala-12.8b-v1
mssongit
"2023-06-02T06:46:25Z"
1,309
0
transformers
[ "transformers", "pytorch", "gpt_neox", "feature-extraction", "polyglot-ko", "gpt-neox", "ko", "dataset:beomi/KoAlpaca-v1.1a", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
feature-extraction
"2023-05-26T08:05:35Z"
---
license: apache-2.0
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
tags:
- polyglot-ko
- gpt-neox
---

This model is a fine-tuned version of [EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b).

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP

### Framework versions

- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
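No usage example is given on the card; a minimal, hedged text-generation sketch (the prompt format is an assumption based on KoAlpaca-style instruction data):

```python
from transformers import pipeline

# ASSUMPTION: although the repo's pipeline tag is feature-extraction, the
# card describes a KoAlpaca fine-tune, so text generation is sketched here.
generator = pipeline(
    "text-generation",
    model="mssongit/Koala-12.8b-v1",
    device_map="auto",
    torch_dtype="auto",
)

print(generator("### 질문: 폴리글랏 모델이 무엇인지 설명해줘.\n\n### 답변:",
                max_new_tokens=128)[0]["generated_text"])
```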