| Column | Type | Range / distinct values |
| --- | --- | --- |
| modelId | string | lengths 5-122 |
| author | string | lengths 2-42 |
| last_modified | unknown | n/a |
| downloads | int64 | 0-738M |
| likes | int64 | 0-11k |
| library_name | string (categorical) | 245 values |
| tags | sequence | lengths 1-4.05k |
| pipeline_tag | string (categorical) | 48 values |
| createdAt | unknown | n/a |
| card | string | lengths 1-901k |
DopeorNope/love_but_hate-10.7B
DopeorNope
"2024-01-03T17:27:28Z"
1,109
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-03T16:58:30Z"
# Please **Never use my model for merging!!!**
LI-ST/Mistral-7B-ko-v0.002
LI-ST
"2024-01-09T07:35:45Z"
1,109
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "ko", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-08T10:20:42Z"
--- license: cc-by-nc-nd-4.0 language: - en - ko library_name: transformers pipeline_tag: text-generation --- # Mistral-7B-ko base model: Open-Orca/Mistral-7B-OpenOrca This model is a temporary model for testing. We will be deleting it soon.
Edentns/DataVortexS-10.7B-v0.1
Edentns
"2024-02-24T14:18:49Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:nlpai-lab/kullm-v2", "base_model:hyeogi/SOLAR-10.7B-dpo-v0.1", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-10T12:03:23Z"
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: hyeogi/SOLAR-10.7B-dpo-v0.1 pipeline_tag: text-generation datasets: - nlpai-lab/kullm-v2 --- # **DataVortexS-10.7B-v0.1** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [hyeogi/SOLAR-10.7B-dpo-v0.1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v0.1) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 1ea - **transformers**: v4.36.2 ### **Dataset** - [nlpai-lab/kullm-v2](https://huggingface.co/datasets/nlpai-lab/kullm-v2) ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ 당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다. ### Instruction: 대한민국의 수도는 어디야? ### Response: 대한민국의 수도는 서울입니다. ### Instruction: 서울 인구는 총 몇 명이야? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -------------: | -----------: | ------------: | -----------: | | kobest_boolq | 0.334282 | 0.642861 | 0.691496 | 0.638754 | | kobest_copa | 0.584962 | 0.564325 | 0.570654 | 0.581035 | | kobest_hellaswag | 0.340022 | 0.339401 | 0.341917 | 0.337713 | | kobest_sentineg | 0.328257 | 0.414905 | 0.464711 | 0.888914 | | **Average** | **0.39688075** | **0.490373** | **0.5171945** | **0.611604** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 35.39 | 28.48 | 39.79 | 35.98 | 44.72 | 27.63 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.1") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.1") messages = [ {"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."}, {"role": "user", "content": "대한민국의 수도는 어디야?"}, {"role": "assistant", "content": "대한민국의 수도는 서울입니다."}, {"role": "user", "content": "서울 인구는 총 몇 명이야?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
etri-xainlp/llama2-12.8b_lora-dpo_v1
etri-xainlp
"2024-01-17T02:22:50Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-17T01:58:46Z"
--- license: apache-2.0 --- # etri-xainlp/llama2-12.8b_lora-dpo_v1 ## Model Details **Model Developers** ETRI xainlp team **Input** text only. **Output** text only. **Model Architecture** **Base Model** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) **Training Dataset** - sft+lora: 710k instruction-following set - dpo+lora: 90k user preference set - We used 8 A100 80GB GPUs for training.
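Since the card ships no usage snippet, here is a minimal inference sketch, assuming the checkpoint loads with the standard `AutoModelForCausalLM`/`AutoTokenizer` classes (the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "etri-xainlp/llama2-12.8b_lora-dpo_v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Text-only input, text-only output, as stated in the model details above.
prompt = "Explain the difference between supervised fine-tuning and DPO in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```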
TeamUNIVA/Komodo_6B_v1.0.0
TeamUNIVA
"2024-03-04T09:25:34Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "en", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-30T10:39:06Z"
--- license: apache-2.0 language: - ko - en --- # Base Model beomi/Yi-Ko-6B ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TeamUNIVA/Komodo_6B_v1.0.0" model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) text = '''<|system|> ๋‹น์‹ ์€ ์‚ฌ์šฉ์ž์˜ ์งˆ๋ฌธ์— ์นœ์ ˆํ•˜๊ฒŒ ๋‹ต๋ณ€์„ ํ•˜๋Š” ์ฑ—๋ด‡์ž…๋‹ˆ๋‹ค. <|user|> ์•ˆ๋…•ํ•˜์„ธ์š”? <|bot|> ''' inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jungyuko/DAVinCI-Yi-Ko-6B-v0.8
jungyuko
"2024-01-31T08:57:58Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-31T08:13:40Z"
--- license: cc-by-nc-4.0 --- ## DAVinCI-Yi-Ko-6B-v0.8 This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on an unknown dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 4 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 8 * total_train_batch_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
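For orientation, the listed hyperparameters map roughly onto a `transformers.TrainingArguments` configuration like the sketch below (the output directory and precision flag are assumptions; the original training script is not published):

```python
from transformers import TrainingArguments

# Approximate restatement of the hyperparameters listed in the card.
training_args = TrainingArguments(
    output_dir="./davinci-yi-ko-6b-v0.8",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 4 x 8 = total train batch size 32
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    fp16=True,  # "Native AMP" mixed precision
)
```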
ITT-AF/ITT-42dot_LLM-PLM-1.3B-dpo-v1.0
ITT-AF
"2024-02-14T06:31:32Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-03T13:23:29Z"
--- license: cc-by-nc-4.0 --- # ITT-AF/ITT-42dot_LLM-PLM-1.3B-dpo-v1.0 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.0
mu0gum
"2024-02-14T06:28:03Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-13T16:05:06Z"
--- license: cc-by-nc-4.0 --- # AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.0 Base model: 42dot/42dot_LLM-PLM-1.3B Training data: about 48,000 examples of a self-built Open Orca-style dataset (deduplicated and distribution-balanced) Training method: full fine-tuning Epochs: 3 ## ko-lm-evaluation-harness (5-shot) |kobest_boolq|kobest_copa|kobest_hellaswag|pawsx_ko| |--|--|--|--| |0.5220797720797721|0.72|0.458|0.563| ## Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
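No usage snippet is given; a minimal text-generation sketch with the Hugging Face `pipeline` API might look like this (the Korean prompt and generation length are illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.0",
)

# Illustrative instruction-style prompt ("Where is the capital of South Korea?").
result = generator("대한민국의 수도는 어디야?", max_new_tokens=64)
print(result[0]["generated_text"])
```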
JONGYUN/DPO_Test_2
JONGYUN
"2024-03-05T23:44:19Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-21T04:03:49Z"
--- license: apache-2.0 language: - ko pipeline_tag: text-generation --- # Llama-2-7b-hf dpo test model ### Model Details - Developed by: JongYun CHOI - Backbone Model: yanolja/KoSOLAR-10.7B-v0.2 - Library: [transformers](https://github.com/huggingface/transformers) ### Used Datasets - private dataset ### Prompt Template ``` ### 질문: {Instruction} ### 답변: {Answer} ```
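A minimal sketch of filling in the prompt template above and generating a response (the question is illustrative, and standard `transformers` loading is assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JONGYUN/DPO_Test_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the card's template: "### 질문:" (question) / "### 답변:" (answer).
prompt = "### 질문: 대한민국의 수도는 어디야?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```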
ENERGY-DRINK-LOVE/eeve_dpo-v3
ENERGY-DRINK-LOVE
"2024-03-07T02:54:50Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "arxiv:2305.18290", "base_model:yanolja/EEVE-Korean-Instruct-10.8B-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-27T21:57:37Z"
--- license: apache-2.0 tags: - trl - dpo - generated_from_trainer base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ENERGY-DRINK-LOVE/eeve_dpo-v3 ### Our Team * Jingyeom Kim * Youjin Chung ## Model ### Base Model * [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) ### Hardware and Software * Hardware: 8 x A100 GPUs for training our model * Deepspeed library & Huggingface TRL Trainer ### Dataset * DPO_dataset * Self-built DPO dataset (based on the AI-Hub dataset) * Translations of English datasets such as OpenOrca DPO (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model) ### Training Method * [DPO](https://arxiv.org/abs/2305.18290) ## Benchmark **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | | :--------------- | -----------: | -----------: | | kobest_boolq | 0.950142 | 0.944444 | | kobest_copa | 0.751 | 0.835 | | kobest_hellaswag | 0.474 | 0.508 | | kobest_sentineg | 0.811083 | 0.972292 | | **Average** | **0.74655625** | **0.81493399** | **[Ko-LLM-Leaderboard](https://www.aihub.or.kr/leaderboard/view.do?currMenu=500&topMenu=102)** * (ranked 7th as of 2024-03-07) | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 57.97 | 57.51 | 67.01 | 56.3 | 54.86 | 54.19 |
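The card does not include the training script; as a rough illustration of the TRL-based DPO setup it describes, a minimal `DPOTrainer` sketch could look like the following (the toy preference rows, beta value, and hyperparameters are all assumptions, and the DeepSpeed configuration is omitted):

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "yanolja/EEVE-Korean-Instruct-10.8B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# DPO expects "prompt", "chosen" and "rejected" columns; the team's actual
# preference data is private, so these toy rows only show the schema.
train_dataset = Dataset.from_dict({
    "prompt": ["안녕하세요?"],
    "chosen": ["안녕하세요! 무엇을 도와드릴까요?"],
    "rejected": ["모르겠습니다."],
})

trainer = DPOTrainer(
    model,
    ref_model=None,  # TRL builds a frozen reference copy when None
    beta=0.1,        # assumed; not stated on the card
    args=TrainingArguments(
        output_dir="./eeve_dpo-v3",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
        bf16=True,
        remove_unused_columns=False,  # keep the preference columns for the DPO collator
    ),
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```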
ITT-AF/ITT-Yi-Ko-6B-v5.0
ITT-AF
"2024-02-29T09:48:33Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-29T08:21:02Z"
--- license: cc-by-nc-4.0 --- ## ITT-AF/ITT-Yi-Ko-6B-v5.0 This model is a fine-tuned version of [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B) on a custom dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 4 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 8 * total_train_batch_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
hwkwon/S-SOLAR-10.7B-SFT-v1.2
hwkwon
"2024-03-10T06:55:30Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-10T06:45:22Z"
--- license: cc-by-nc-4.0 language: - ko --- # S-SOLAR-10.7B <!-- Provide a quick summary of what the model is/does. --> <!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).--> ### Model Description <!-- Provide a longer summary of what this model is. --> This model is a fine-tuned version of [Edentns/DataVortexS-10.7B-dpo-v1.11](https://huggingface.co/Edentns/DataVortexS-10.7B-dpo-v1.11) with DeepSpeed. ### Training Data TBA ### Prompt Template ``` ### User: User query input ### Assistant: ``` ### License This model is licensed under cc-by-nc-4.0, which allows others to share and adapt the model for non-commercial purposes.
hyeogi/SOLAR-10.7B-v1.4
hyeogi
"2024-03-12T06:11:03Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "SOLAR-10.7B", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-12T05:31:08Z"
--- language: - ko pipeline_tag: text-generation tags: - SOLAR-10.7B license: cc-by-nc-4.0 --- # SOLAR-10.7B ### Model Details - Base Model: [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2) ### Datasets - sampled and translated [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) - sampled and converted to instruction format [HAERAE-HUB/KMMLU](https://huggingface.co/datasets/HAERAE-HUB/KMMLU)
chahyunmook/42dot-test-upload
chahyunmook
"2024-03-25T04:32:45Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "arxiv:1910.09700", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-25T04:18:59Z"
--- library_name: transformers license: cc-by-4.0 language: - ko --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JY623/KoSOLRA-10.7B-merge-v2.2
JY623
"2024-04-04T10:48:23Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:JY623/KoSOLAR-10.7B-merge-v2.0", "base_model:rrw-x2/KoSOLAR-10.7B-v2.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-04T10:18:27Z"
--- base_model: - JY623/KoSOLAR-10.7B-merge-v2.0 - rrw-x2/KoSOLAR-10.7B-v2.1 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # merge_base_ft_2 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [JY623/KoSOLAR-10.7B-merge-v2.0](https://huggingface.co/JY623/KoSOLAR-10.7B-merge-v2.0) * [rrw-x2/KoSOLAR-10.7B-v2.1](https://huggingface.co/rrw-x2/KoSOLAR-10.7B-v2.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: JY623/KoSOLAR-10.7B-merge-v2.0 layer_range: [0, 48] - model: rrw-x2/KoSOLAR-10.7B-v2.1 layer_range: [0, 48] merge_method: slerp base_model: JY623/KoSOLAR-10.7B-merge-v2.0 parameters: t: 0.2 dtype: bfloat16 ```
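Assuming [mergekit](https://github.com/cg123/mergekit) is installed (e.g. `pip install mergekit`), a configuration like the one above is typically applied with the `mergekit-yaml` command; the config filename and output path below are placeholders:

```bash
# Run the SLERP merge described by the YAML config above.
mergekit-yaml slerp_config.yaml ./KoSOLAR-10.7B-merge-v2.2 --copy-tokenizer
```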
sWizad/pokemon-trainer-sprite-pixelart
sWizad
"2024-04-17T01:06:06Z"
1,109
37
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "migrated", "pixel art", "sprites", "pixel", "style", "pokemon", "game cg", "video game art", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
"2024-04-15T16:27:10Z"
--- license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Image&allowDerivatives=True&allowDifferentLicense=True tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - migrated - pixel art - sprites - pixel - style - pokemon - game cg - video game art base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: widget: - text: 'harley quinn, simple background' output: url: >- 9737393.jpeg - text: 'a woman with blonde hair, wearing a white summer dress, simple background' output: url: >- 9737398.jpeg - text: 'maleficent, evil fairy, simple background' output: url: >- 9737407.jpeg - text: 'Daenerys Targaryen, Game of Thrones, simple background' output: url: >- 9737466.jpeg - text: 'joker, simple background' output: url: >- 9737478.jpeg - text: 'jack sparrow, pirates of caribbean ' output: url: >- 9737508.jpeg - text: 'iron man ' output: url: >- 9737483.jpeg - text: 'deadpool ' output: url: >- 9737471.jpeg - text: 'neo the matrix ' output: url: >- 9737470.jpeg - text: 'charlie chaplin ' output: url: >- 9737472.jpeg - text: 'Super Mario stepped on a turtle ' output: url: >- 9737429.jpeg - text: 'link of zelda, with a sword ' output: url: >- 9737481.jpeg - text: '1girl angel with 2 large angel wings and a halo, perfect curves, wearing a white dress, ' output: url: >- 9737509.jpeg - text: 'woman, beautiful kimono, sakura ' output: url: >- 9737499.jpeg - text: '1girl, glasses, witch, hat, blue flame ' output: url: >- 9737502.jpeg - text: '1girl, solo, hood, ' --- # Pokemon Trainer Sprite PixelArt <Gallery /> ([CivitAI](https://civitai.com/models/159333)) ## Model description <h1 id="heading-524">Pokémon Trainer Sprite Generator</h1><p>LIKE THIS MODEL? GIVE A ❤️</p><p>Trained on Pokémon sprites of 96x96 and BLIP image captions.</p><h2 id="heading-525">Quick Tips</h2><ul><li><p><strong>No Trigger Word Needed:</strong> This model does not require a trigger word.</p></li><li><p><strong>Art Styles:</strong> You can use the tags <code>gen1</code> and <code>gen2</code> to generate different art styles.</p></li><li><p><strong>Trainer Recognition:</strong> The model can recognize some trainer names, such as Ash and Bruno.</p></li><li><p>This LoRA has been tested with Dreamshaper and RealisticVision, but I believe that it should work well with other models too.</p></li></ul><p></p><h3 id="heading-146">Model Versions</h3><p>This Pokémon Trainer Sprite LoRA is available in two different versions: 768x768 and 512x512. Both models are trained on images upscaled from their original 96x96 size and are specifically designed to produce sprites at their respective resolutions. The choice between these two versions allows users to customize their experience based on their specific needs.</p><p></p><p>For the best results, we recommend using the 768x768 version. Set your desired resolution to 768x768, and then downscale the result 8 times to achieve a 96x96 sprite. This downscaling process can be performed using any image editing program, but for a smoother post-processing experience, we recommend using a plug-in available at (<a target="_blank" rel="ugc" href="https://github.com/mrreplicart/sd-webui-pixelart">https://github.com/mrreplicart/sd-webui-pixelart</a>).</p><p></p><p><span style="color:rgb(255, 255, 255)">Enjoy your Pokémon Trainer sprite!</span></p> ## Download model Weights for this model are available in Safetensors format.
[Download](/sWizad/pokemon-trainer-sprite-pixelart/tree/main) them in the Files & versions tab. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('sWizad/pokemon-trainer-sprite-pixelart', weight_name='pk_trainer_xl_v1.safetensors') image = pipeline('1girl, solo, hood, simple background').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
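The 8x downscale recommended above can also be done in a couple of lines with Pillow; this sketch assumes a 768x768 generation saved as `sprite_768.png` (a placeholder path) and uses nearest-neighbour resampling to keep hard pixel edges:

```python
from PIL import Image

# Downscale a 768x768 generation by 8x to get a crisp 96x96 sprite.
img = Image.open("sprite_768.png")
sprite = img.resize((96, 96), resample=Image.NEAREST)
sprite.save("sprite_96.png")
```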
EnumaInc/llama-8b-ko-slimorca-45000
EnumaInc
"2024-04-25T14:58:07Z"
1,109
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "generated_from_trainer", "base_model:beomi/Llama-3-Open-Ko-8B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-25T09:04:12Z"
--- license: other base_model: beomi/Llama-3-Open-Ko-8B tags: - generated_from_trainer model-index: - name: out-llama-8b-ko-slimorca_45000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.0` ```yaml base_model: beomi/Llama-3-Open-Ko-8B model_type: LlamaForCausalLM tokenizer_type: AutoTokenizer load_in_8bit: false load_in_4bit: false strict: false # datasets: # - path: /workspace/axolotl/datasets/mix_corpus_extended_validated_stage1.json # type: completion # field: text # /workspace/axolotl/datasets/slimorca_20000.jsonl datasets: - path: /workspace/axolotl/datasets/slimorca_ko_45000.jsonl type: sharegpt conversation: chatml dataset_prepared_path: last_run_prepared val_set_size: 0.05 eval_sample_packing: False output_dir: ./out-llama-8b-ko-slimorca_45000 sequence_len: 8192 sample_packing: true pad_to_sequence_len: true wandb_project: wandb_entity: wandb_watch: wandb_name: wandb_log_model: gradient_accumulation_steps: 8 micro_batch_size: 1 num_epochs: 1 optimizer: paged_adamw_8bit lr_scheduler: cosine learning_rate: 2e-5 train_on_inputs: false group_by_length: false bf16: auto fp16: tf32: false gradient_checkpointing: true gradient_checkpointing_kwargs: use_reentrant: false early_stopping_patience: resume_from_checkpoint: logging_steps: 1 xformers_attention: flash_attention: true warmup_steps: 100 evals_per_epoch: 1 eval_table_size: saves_per_epoch: 1 debug: deepspeed: weight_decay: 0.0 fsdp: fsdp_config: special_tokens: pad_token: <|end_of_text|> ``` </details><br> # out-llama-8b-ko-slimorca_45000 This model is a fine-tuned version of [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8945 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.0058 | 0.99 | 102 | 0.8945 | ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.0+cu121 - Datasets 2.15.0 - Tokenizers 0.15.0
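For reference, a config like the one shown above is normally launched with Axolotl's CLI; the filename and single-node invocation here are assumptions:

```bash
# Train with the Axolotl (v0.4.0) YAML config shown above.
accelerate launch -m axolotl.cli.train llama-8b-ko-slimorca_45000.yaml
```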
Nitral-AI/Poppy_Porpoise-0.72-L3-8B
Nitral-AI
"2024-06-22T18:45:09Z"
1,109
20
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Nitral-AI/PP_0.71b", "base_model:Nitral-AI/PP_0.71a", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-02T00:39:25Z"
--- base_model: - Nitral-AI/PP_0.71b - Nitral-AI/PP_0.71a library_name: transformers tags: - mergekit - merge license: other --- # "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/_R1hvMt9_QSBRzlnzo_nY.png) # Quants Available From Lewdiculus: https://huggingface.co/Lewdiculous/Poppy_Porpoise-0.72-L3-8B-GGUF-IQ-Imatrix # Recommended ST Presets (Updated for 0.72): [Porpoise Presets](https://huggingface.co/Nitral-AI/Poppy_Porpoise-0.72-L3-8B/tree/main/Poppy%200.72%20Presets%20(updated)) If you want to use vision functionality: * You must use the latest versions of [Koboldcpp](https://github.com/LostRuins/koboldcpp). # To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file, which can be found inside this model repo. [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16) * You can load the **mmproj** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
chlee10/T3Q-LLM3-Llama3-sft1.0
chlee10
"2024-05-02T06:05:12Z"
1,109
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-02T05:36:54Z"
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
NousResearch/Hermes-2-Theta-Llama-3-70B
NousResearch
"2024-06-20T20:00:04Z"
1,109
53
transformers
[ "transformers", "safetensors", "llama", "text-generation", "distillation", "synthetic data", "function calling", "structured outputs", "json mode", "conversational", "en", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-14T10:49:31Z"
--- license: llama3 language: - en pipeline_tag: text-generation tags: - distillation - synthetic data - function calling - structured outputs - json mode --- # Hermes 2 Theta Llama-3 70B Model Card ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/P4NxBFwfBbboNZVytpn45.png) ## Model Description Hermes-2 Θ (Theta) 70B is the continuation of our experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard and [Arcee AI](https://www.arcee.ai/), the team behind [MergeKit](https://github.com/arcee-ai/mergekit). Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model to form a new model, Hermes-2 Θ, combining the best of both worlds of each model. # Comparison to Llama-3 70B Instruct Model: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/sFhJmgamOCDJJdfjWbAQU.png) # Example Outputs ## System Prompt: Roleplay as an Anime Catgirl who's good at programming and a hacker. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/sr1_JNTi8QRkVVF607Eeg.png) ## Providing Structured Outputs for Annotating LLM Training Data ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/MqwomdZ6cwN0lGqRUZFqh.png) ## System Prompt: Roleplay as a bombastic alchemist from the 17th century in France. You are on a journey with the user to find the philosopher's stone. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/e59zj7WCvR8jvcOKwqDEc.png) # Prompt Format & Capabilities Hermes 2 Θ uses ChatML as the prompt format, opening up a very structured and steerable, multiturn system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt, where uniquely added tokens were added to denote the beginning and end of any turn, along with roles for the turns. The model is also specifically trained for Function Calling, Structured Outputs with JSON, and Feature Extraction from RAG Documents (see below). ## Conversational Chats Here is an example prompt with system instruction (Use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence.
I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` ## Function Calling Format Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function. 
See example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant <tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to: <schema> {schema} </schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create it's response, all you have to do is give a typical user prompt, and it will respond in JSON. 
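Before the benchmarks, one practical note: as mentioned above, the `<tool_call>` turns have to be parsed out of the raw completion by your own inference code. A minimal parsing sketch (the regex and the example completion are illustrative, not taken from the official Hermes-Function-Calling repository):

```python
import json
import re

def parse_tool_calls(completion: str):
    """Extract the JSON payloads from <tool_call>...</tool_call> blocks."""
    pattern = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)
    return [json.loads(payload) for payload in pattern.findall(completion)]

completion = (
    '<tool_call>\n'
    '{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n'
    '</tool_call><|im_end|>'
)

for call in parse_tool_calls(completion):
    print(call["name"], call["arguments"])  # get_stock_fundamentals {'symbol': 'TSLA'}
```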
# Benchmark Details ## GPT4All: ``` | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.6638|_ |0.0138| | | |acc_norm|0.6903|_ |0.0135| |arc_easy | 0|acc |0.8851|_ |0.0065| | | |acc_norm|0.8712|_ |0.0069| |boolq | 1|acc |0.8820|_ |0.0056| |hellaswag | 0|acc |0.6579|_ |0.0047| | | |acc_norm|0.8432|_ |0.0036| |openbookqa | 0|acc |0.3920|_ |0.0219| | | |acc_norm|0.4740|_ |0.0224| |piqa | 0|acc |0.8286|_ |0.0088| | | |acc_norm|0.8351|_ |0.0087| |winogrande | 0|acc |0.7893|_ |0.0115| ``` Average: 76.93 ## AGIEval: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat | 0|acc |0.4055|_ |0.0309| | | |acc_norm|0.4094|_ |0.0309| |agieval_logiqa_en | 0|acc |0.5100|_ |0.0196| | | |acc_norm|0.5023|_ |0.0196| |agieval_lsat_ar | 0|acc |0.2783|_ |0.0296| | | |acc_norm|0.2957|_ |0.0302| |agieval_lsat_lr | 0|acc |0.7451|_ |0.0193| | | |acc_norm|0.7333|_ |0.0196| |agieval_lsat_rc | 0|acc |0.8290|_ |0.0230| | | |acc_norm|0.8104|_ |0.0239| |agieval_sat_en | 0|acc |0.9029|_ |0.0207| | | |acc_norm|0.9029|_ |0.0207| |agieval_sat_en_without_passage| 0|acc |0.5825|_ |0.0344| | | |acc_norm|0.5631|_ |0.0346| |agieval_sat_math | 0|acc |0.6318|_ |0.0326| | | |acc_norm|0.6227|_ |0.0328| ``` Average: 60.50 ## BigBench: ``` | Task |Version| Metric |Value | |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement | 0|multiple_choice_grade|0.6737|_ |0.0341| |bigbench_date_understanding | 0|multiple_choice_grade|0.7724|_ |0.0219| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3256|_ |0.0292| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.4763|_ |0.0264| | | |exact_str_match |0.0000|_ |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.4720|_ |0.0223| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.3486|_ |0.0180| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.6367|_ |0.0278| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.5220|_ |0.0224| |bigbench_navigate | 0|multiple_choice_grade|0.5930|_ |0.0155| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.8600|_ |0.0078| |bigbench_ruin_names | 0|multiple_choice_grade|0.7411|_ |0.0207| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.5281|_ |0.0158| |bigbench_snarks | 0|multiple_choice_grade|0.6961|_ |0.0343| |bigbench_sports_understanding | 0|multiple_choice_grade|0.5751|_ |0.0158| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.9880|_ |0.0034| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2296|_ |0.0119| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1691|_ |0.0090| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.6367|_ |0.0278| ``` Average: 56.91 ## TruthfulQA: ```| Task |Version|Metric|Value | |Stderr| |-------------|------:|------|-----:|---|-----:| |truthfulqa_mc| 1|mc1 |0.4565|_ |0.0174| | | |mc2 |0.6288|_ |0.0151| ``` 62.88 ## IFEval: **87.99** ## MTBench: First Turn - **9.1625** Second Turn - **8.925** Average - **9.04375** # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) ```python # Code to inference Hermes with HF Transformers # Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages 
import torch from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM import bitsandbytes, flash_attn tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-70B', trust_remote_code=True) model = LlamaForCausalLM.from_pretrained( "NousResearch/Hermes-2-Theta-Llama-3-70B", torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, load_in_4bit=True, use_flash_attention_2=True ) prompts = [ """<|im_start|>system You are a helpful, intelligent assistant AI named "Hermes", a conversational chatbot that can follow instructions, converse with the user, and perform a variety of tasks, including tasks on knowledge, reasoning, mathematics, and code. Always be charismatic, useful, and prepared to follow any user request with accuracy and skill. You should respond with high quality, fluent, and detailed responses. Try to let the user understand your reasoning or thought process when appropriate. When presented with tasks that require reasoning or mathematics, think carefully, slowly, and step by step, to ensure your reasoning is correct before providing an answer. Utilize the "Examples" section to assist you in performing the task. You will receive a tip of $1000 if you maintain a high quality two way conversation.<|im_end|> <|im_start|>user Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|> <|im_start|>assistant""", ] for chat in prompts: print(chat) input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda") generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id) response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_space=True) print(f"Response: {response}") ``` ## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B-GGUF # How to cite: ```bibtext @misc{Hermes-2-Theta-Llama-3-70B, url={[https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-70B][NousResearch/Hermes-2-Theta-Llama-3-70B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-70B))}, title={Hermes-2-Theta-Llama-3-70B}, author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"} } ```
NikolayKozloff/Viking-7B-Q4_K_M-GGUF
NikolayKozloff
"2024-06-29T19:01:23Z"
1,109
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "fi", "en", "da", "sv", "no", "nn", "is", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:mc4", "base_model:LumiOpen/Viking-7B", "license:apache-2.0", "region:us" ]
null
"2024-06-29T19:01:02Z"
--- base_model: LumiOpen/Viking-7B datasets: - cerebras/SlimPajama-627B - bigcode/starcoderdata - mc4 language: - fi - en - da - sv - 'no' - nn - is license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/Viking-7B-Q4_K_M-GGUF This model was converted to GGUF format from [`LumiOpen/Viking-7B`](https://huggingface.co/LumiOpen/Viking-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LumiOpen/Viking-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/Viking-7B-Q4_K_M-GGUF --hf-file viking-7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/Viking-7B-Q4_K_M-GGUF --hf-file viking-7b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/Viking-7B-Q4_K_M-GGUF --hf-file viking-7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/Viking-7B-Q4_K_M-GGUF --hf-file viking-7b-q4_k_m.gguf -c 2048 ```
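### Python bindings

The same quantized file can also be driven from Python rather than the CLI or server. Below is a minimal sketch using the third-party `llama-cpp-python` package; the repo id, filename, and context size are taken from the commands above, while the exact binding API can differ between package versions, so treat this as an illustration rather than an officially supported path.

```python
# Sketch: loading the Q4_K_M GGUF with llama-cpp-python (pip install llama-cpp-python)
from llama_cpp import Llama

# Pulls the quantized file straight from this repo via the Hugging Face Hub integration.
llm = Llama.from_pretrained(
    repo_id="NikolayKozloff/Viking-7B-Q4_K_M-GGUF",
    filename="viking-7b-q4_k_m.gguf",
    n_ctx=2048,  # same context size as the server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```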
John6666/wai-c-v3-sdxl
John6666
"2024-06-30T16:08:22Z"
1,109
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "semi-realistic", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-30T15:58:29Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - semi-realistic - pony --- Original model is [here](https://civitai.com/models/440170/wai-c?modelVersionId=609321).
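Since this repo is tagged for `diffusers` with `StableDiffusionXLPipeline`, a minimal text-to-image sketch would look roughly like the following. The prompt, negative prompt, and sampler settings are illustrative assumptions only, not recommendations from the uploader.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/wai-c-v3-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

# Pony-derived anime checkpoints usually respond to tag-style prompts; this prompt is just an example.
image = pipe(
    prompt="1girl, looking at viewer, cherry blossoms, detailed background",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```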
chavinlo/alpaca-13b
chavinlo
"2023-05-09T02:14:39Z"
1,108
108
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-03-31T05:05:43Z"
# Alpaca-native 13B

A 13B Alpaca model trained as a native (full-parameter) fine-tune; no LoRA was used.
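The card ships no usage code. Here is a minimal ๐Ÿค— Transformers sketch, assuming the checkpoint follows the standard Stanford Alpaca instruction template; the card itself does not state a prompt format, so the template and generation settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chavinlo/alpaca-13b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Standard Alpaca-style instruction prompt (assumed, not confirmed by the card).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what a hash table is.\n\n### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```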
jjzha/jobbert_skill_extraction
jjzha
"2023-10-26T10:25:11Z"
1,108
9
transformers
[ "transformers", "pytorch", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2023-04-06T13:41:51Z"
This is a demo using the models from: ``` @inproceedings{zhang-etal-2022-skillspan, title = "{S}kill{S}pan: Hard and Soft Skill Extraction from {E}nglish Job Postings", author = "Zhang, Mike and Jensen, Kristian and Sonniks, Sif and Plank, Barbara", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.366", doi = "10.18653/v1/2022.naacl-main.366", pages = "4962--4984", abstract = "Skill Extraction (SE) is an important and widely-studied task useful to gain insights into labor market dynamics. However, there is a lacuna of datasets and annotation guidelines; available datasets are few and contain crowd-sourced labels on the span-level or labels from a predefined skill inventory. To address this gap, we introduce SKILLSPAN, a novel SE dataset consisting of 14.5K sentences and over 12.5K annotated spans. We release its respective guidelines created over three different sources annotated for hard and soft skills by domain experts. We introduce a BERT baseline (Devlin et al., 2019). To improve upon this baseline, we experiment with language models that are optimized for long spans (Joshi et al., 2020; Beltagy et al., 2020), continuous pre-training on the job posting domain (Han and Eisenstein, 2019; Gururangan et al., 2020), and multi-task learning (Caruana, 1997). Our results show that the domain-adapted models significantly outperform their non-adapted counterparts, and single-task outperforms multi-task learning.", } ``` Note that there is another endpoint, namely `jjzha/jobbert_knowledge_extraction`. Knowledge can be seen as hard skills and skills are both soft and applied skills.
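The card describes where the model comes from but not how to call it. Since the repo is a BERT token-classification checkpoint, a span-tagging sketch with the ๐Ÿค— `pipeline` API might look like this; the example sentence is illustrative and the exact label names emitted depend on the checkpoint's config.

```python
from transformers import pipeline

# Token-classification pipeline; the aggregation strategy groups B-/I- tags into spans.
skill_tagger = pipeline(
    "token-classification",
    model="jjzha/jobbert_skill_extraction",
    aggregation_strategy="first",
)

text = "We are looking for a data engineer with strong Python skills and good communication."
for span in skill_tagger(text):
    print(span["word"], span["entity_group"], round(span["score"], 3))
```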
sambanovasystems/BLOOMChat-176B-v1
sambanovasystems
"2023-05-19T20:34:37Z"
1,108
367
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-10T21:17:39Z"
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards license: other inference: false --- # BLOOMChat V1.0 <!-- Provide a quick summary of what the model is/does. --> BLOOMChat is a 176 billion parameter multilingual chat model. It is instruction tuned from [BLOOM (176B)](https://huggingface.co/bigscience/bloom) on assistant-style conversation datasets and supports conversation, question answering and generative answers in multiple languages. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [SambaNova Systems](https://sambanova.ai/) - **Co-developed by:** [Together Computer](https://www.together.xyz/) - **Model type:** Language Model - **Language(s):** Multiple; see [training data from BLOOM](https://huggingface.co/bigscience/bloom#training-data) - **License:** BLOOMChat-176B LICENSE v1.0 - **Instruction Tuned from model:** [BigScience Group BLOOM](https://huggingface.co/bigscience/bloom) ### Basic Information <!-- Provide the basic links for the model. --> - **Blog Post**: [Link](https://sambanova.ai/blog/introducing-bloomchat-176b-the-multilingual-chat-based-llm/) - **Discord**: [Link](https://discord.com/invite/8z2Pe7cpRv) - **HF Hosting**: [Chat with me!](https://huggingface.co/spaces/sambanovasystems/BLOOMChat) - **Github**: [Link](https://github.com/sambanova/bloomchat) ### Licensing To increase accessibility and to support the open-source community, SambaNova is releasing BLOOMChat under a modified version of the Apache 2.0 license, which includes use-based restrictions from BLOOMโ€™s RAIL license. While use-based restrictions are necessarily passed through, there are no blanket restrictions on reuse, distribution, commercialization or adaptation. [Please review SambaNovaโ€™s BLOOMChat-176B License](LICENSE) ## Uses <details> <summary>Click to expand</summary> <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model is intended for commercial and research use. ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> BLOOMChat should NOT be used for: - Mission-critical applications - Applications that involve the safety of others - Making highly important decisions - Important automated pipelines This model is still in early development and can be prone to mistakes and hallucinations, there is still room for improvement. This model is intended to provide the community with a multilingual chat LLM baseline. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases, limitations, and restrictions of the model, which are listed down at the bottom of the page. 
</details> --- ## How to Get Started with the Model <details> <summary>Click to expand</summary> ### Loading in model with Huggingface ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/BLOOMChat-176B-v1") model = AutoModelForCausalLM.from_pretrained("sambanovasystems/BLOOMChat-176B-v1", device_map="auto", torch_dtype="auto") ``` ### Quick Start Inference on SambaNova's in-house Reconfigurable Dataflow Unit (RDU) The inference code to run the model can be found our [github repo](https://github.com/sambanova/bloomchat/blob/main/rdu_quick_start/inference.py). This code requires the [SambaFlow](https://docs.sambanova.ai/developer/latest/sambaflow-intro.html) SDK to execute. For those interested in running models on RDUs, [please feel free to get in touch](https://sambanova.ai/getstarted). ### Quick Start Inference on GPU First create a python virtual environment for these packages ``` python3 -m venv bloomchat_venv source bloomchat_venv/bin/activate pip install --upgrade pip ``` <!-- Please follow this section [Inference solutions for BLOOM 176B](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) in the Huggingface Tutorial for environment set up and stop before the [BLOOM inference via command-line ](https://github.com/huggingface/transformers-bloom-inference#bloom-inference-via-command-line) section. --> ``` pip install flask flask_api gunicorn pydantic accelerate huggingface_hub>=0.9.0 deepspeed>=0.7.3 deepspeed-mii==0.0.2 ``` And then ``` pip install transformers==4.27.0 ``` You will see messages like this ``` ERROR: deepspeed-mii 0.0.2 has requirement transformers==4.21.2, but you'll have transformers 4.27.0 which is incompatible. Installing collected packages: transformers Found existing installation: transformers 4.21.2 Uninstalling transformers-4.21.2: Successfully uninstalled transformers-4.21.2 Successfully installed transformers-4.27.0 ``` Now let's git clone the [huggingface/transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo. 
``` git clone https://github.com/huggingface/transformers-bloom-inference.git cd transformers-bloom-inference/ ``` And then you need to modify two files in this [transformers-bloom-inference](https://github.com/huggingface/transformers-bloom-inference) repo: - Modifying `inference_server/models/hf_accelerate.py` - This is because for our testing of this repo we used 4 80GB A100 GPUs and would run into memory issues - Modifying `inference_server/cli.py` - This is because the model was trained using specific human, bot tags - Trailing spaces may lead to subpar performance Modifications for `inference_server/models/hf_accelerate.py`: ```diff diff --git a/inference_server/models/hf_accelerate.py b/inference_server/models/hf_accelerate.py index 9be3c3f..a8ecb1d 100644 --- a/inference_server/models/hf_accelerate.py +++ b/inference_server/models/hf_accelerate.py @@ -1,4 +1,5 @@ from argparse import Namespace +from accelerate.utils.modeling import get_max_memory import torch @@ -12,6 +13,12 @@ class HFAccelerateModel(Model): kwargs = {"pretrained_model_name_or_path": args.model_name, "device_map": "auto"} + original_max_memory_dict = get_max_memory() + + reduce_max_memory_dict = {device_key: int(original_max_memory_dict[device_key] * 0.85) for device_key in original_max_memory_dict} + + kwargs["max_memory"] = reduce_max_memory_dict + if get_world_size() > 1: kwargs["device_map"] = "balanced_low_0" ``` Modifications for `inference_server/cli.py`: ```diff diff --git a/inference_server/cli.py b/inference_server/cli.py index fc903d5..5450236 100644 --- a/inference_server/cli.py +++ b/inference_server/cli.py @@ -22,6 +22,9 @@ def main() -> None: while True: input_text = input("Input text: ") + input_text = input_text.strip() + modified_input_text = f"<human>: {input_text}\n<bot>:" + if input("change generate_kwargs? [y/n] ") == "y": while True: try: @@ -33,7 +36,7 @@ def main() -> None: print("message =", e_message) continue - response = model.generate(text=[input_text], generate_kwargs=generate_kwargs) + response = model.generate(text=[modified_input_text], generate_kwargs=generate_kwargs) print_rank_0("Output text:", response.text[0]) print_rank_0("Generated tokens:", response.num_generated_tokens[0]) ``` And now you are good to go! 
Running command for bf16, NO sampling ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}' ``` Running command for bf16, YES sampling ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype bf16 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}' ``` --- Running command for int8 (sub optimal performance, but fast inference time) NO sampling: ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": false, "max_new_tokens": 512}' ``` Running command for int8 (sub optimal performance, but fast inference time) YES sampling: ``` python -m inference_server.cli --model_name sambanovasystems/BLOOMChat-176B-v1 --model_class AutoModelForCausalLM --dtype int8 --deployment_framework hf_accelerate --generate_kwargs '{"do_sample": true, "temperature": 0.8, "repetition_penalty": 1.2, "top_p": 0.9, "max_new_tokens": 512}' ``` **DISCLAIMER:** When using int8, the results will be subpar compared to bf16 as the model is being [quantized](https://huggingface.co/blog/hf-bitsandbytes-integration#introduction-to-model-quantization). ### Suggested Inference Parameters - Temperature: 0.8 - Repetition penalty: 1.2 - Top-p: 0.9 - Max generated tokens: 512 ### Suggested Prompts To Try in GPU Tutorial ``` Input text: Write a script in which Bob accidentally breaks his dad's guitar ``` ``` Input text: Create an itemized list of tasks to complete to start a clothing brand ``` ``` Input text: ๅไธƒๅฒ็š„้ฃŽๆ˜ฏไป€ไนˆ้ขœ่‰ฒ็š„? ``` </details> --- ## Some example completions for English <details> <summary>Click to expand</summary> ``` <human>: What does it mean for the Fed to raise or lower interest rates? <bot>: The Federal Reserve (also known as the Fed) is the central banking system of the United States. It was created in 1913, and is an independent agency of the federal government. The Fed has a dual mandate of maximizing employment and stabilizing prices. The Fed sets interest rates, and can influence the money supply and the credit markets through its actions. Raising interest rates means that the Fed is trying to reduce the money supply in the economy, and is trying to discourage borrowing and encourage saving. Lowering interest rates means that the Fed is trying to stimulate economic growth by encouraging borrowing and reducing the cost of credit. The Fed raises or lowers interest rates in order to influence the economy and achieve its goals for employment and price stability. Interest rates are one of the primary tools that the Fed uses to achieve its goals. ``` ``` <human>: Write an engaging and well-written property listing description for selling a house. Address of the house and some of the details are given to you. Fill in the information gap with hallucinations if needed. Property Address: 412 Monterey Ave, Capitola, CA 95010 Parking: 3 cars + a finished single car garage Details: - open floorplan - window with views to park/mountains - Kitchen with shaker cabinetry and white Calcatta quartz counters <bot>: This home at 412 Monterey Ave, Capitola, CA 95010 is the perfect place to call home. 
With its open floor plan, it provides the perfect space for entertaining guests. The kitchen is equipped with shaker cabinetry and white Calcatta quartz counters, providing the perfect space to create culinary masterpieces. The window with views to the park and mountains provide a beautiful view of the outdoors, making this home feel like a piece of paradise. The three-car garage and finished single car garage provide ample parking space, making this home a must-see for anyone looking for a new home. ``` ``` <human>: What is artificial intelligence? Explain it so an 8 year old will understand. <bot>: Artificial Intelligence (AI) is a computer program or machine that can do things that would normally require human intelligence, such as recognizing patterns, making decisions, and learning from experience. AI can be found in many places, such as self-driving cars, smartphones, and even robots. AI is used to make things easier for people, like when your smartphone can translate text into another language or when your voice-controlled speaker can play music for you. AI works by using algorithms and big data to learn and improve over time. For example, self-driving cars use AI to analyze the environment around them and make decisions about how to drive. The car gathers data from its sensors, processes it using algorithms, and adjusts its behavior accordingly. This process is called machine learning. AI is still a developing technology, and there are still many things that AI programs cannot do yet. However, as AI continues to evolve, it is likely that it will become more and more capable of performing tasks that were once thought to require human intelligence. ``` </details> --- ## Some example completions for Multilingual <details> <summary>Click to expand</summary> #### Arabic ``` <human>: ู‡ู„ ุงู„ุนู…ู„ ู†ุตู ุฏูˆุงู… ููƒุฑุฉ ุฌูŠุฏุฉ ู„ู…ุฑุงู‡ู‚ูŠู† ุจูŠู† ุนู…ุฑ ูกูฅ ูˆ ูกูฉ ุนุงู… <bot>: ู†ุนู…ุŒ ูŠู…ูƒู† ุฃู† ุชูƒูˆู† ููƒุฑุฉ ุฌูŠุฏุฉ ู„ุจุนุถ ุงู„ู…ุฑุงู‡ู‚ูŠู† ููŠ ู‡ุฐุง ุงู„ุนู…ุฑ. ู‡ู†ุงูƒ ุงู„ุนุฏูŠุฏ ู…ู† ุงู„ููˆุงุฆุฏ ุงู„ุชูŠ ู‚ุฏ ูŠุญู‚ู‚ู‡ุง ุงู„ู…ุฑุงู‡ู‚ูˆู† ุฅุฐุง ุนู…ู„ูˆุง ุจู†ุตู ุฏูˆุงู…: - ูŠุณุงุนุฏู‡ู… ุนู„ู‰ ุชุนู„ู… ุงู„ู…ุณุคูˆู„ูŠุฉ ูˆุฅุฏุงุฑุฉ ุงู„ูˆู‚ุช: ุนู†ุฏู…ุง ูŠุนู…ู„ ุงู„ู…ุฑุงู‡ู‚ ุจู†ุตู ุฏูˆุงู…ุŒ ูุฅู†ู‡ ูŠุชุนู„ู… ูƒูŠููŠุฉ ุฅุฏุงุฑุฉ ูˆู‚ุชู‡ ุจุดูƒู„ ูุนุงู„ ูˆูƒูŠููŠุฉ ุชู‚ุณูŠู… ุณุงุนุงุช ุนู…ู„ู‡ ุฅู„ู‰ ู…ู‡ุงู… ุฃุตุบุฑ. - ูŠุญุณู† ู…ู‡ุงุฑุงุช ุงู„ุชูˆุงุตู„ ูˆุงู„ุนู…ู„ ุงู„ุฌู…ุงุนูŠ: ุบุงู„ุจู‹ุง ู…ุง ูŠูƒูˆู† ู…ูƒุงู† ุงู„ุนู…ู„ ุจูŠุฆุฉ ู…ุชุนุฏุฏุฉ ุงู„ุซู‚ุงูุงุช ุญูŠุซ ูŠู„ุชู‚ูŠ ุงู„ู…ุฑุงู‡ู‚ูˆู† ุจู…ุฌู…ูˆุนุฉ ู…ุชู†ูˆุนุฉ ู…ู† ุงู„ุฃุดุฎุงุต ูˆูŠุชุนู„ู…ูˆู† ูƒูŠููŠุฉ ุงู„ุชุนุงูˆู† ู…ุนู‡ู… ูˆุชุจุงุฏู„ ุงู„ุฃููƒุงุฑ. - ูŠูˆูุฑ ูุฑุตุฉ ู„ู„ุชุนุฑู ุนู„ู‰ ุนุงู„ู… ุงู„ุนู…ู„: ูŠุชูŠุญ ุงู„ุนู…ู„ ู„ู„ู…ุฑุงู‡ู‚ ุงู„ุชุนุฑู ุนู„ู‰ ุนุงู„ู… ุงู„ุนู…ู„ ูˆุงูƒุชุณุงุจ ุฎุจุฑุฉ ุนู…ู„ูŠุฉ ู‚ูŠู…ุฉ ุณุชุณุงุนุฏู‡ ู„ุงุญู‚ู‹ุง ุนู†ุฏ ุงู„ุจุญุซ ุนู† ูˆุธูŠูุฉ ุฃูˆ ุจุฏุก ุญูŠุงุชู‡ ุงู„ู…ู‡ู†ูŠุฉ. - ูŠุนุฒุฒ ุงู„ุดุนูˆุฑ ุจุงู„ู…ุณุคูˆู„ูŠุฉ ูˆุงู„ุซู‚ุฉ ุจุงู„ู†ูุณ: ุนู†ุฏู…ุง ูŠุนู…ู„ ุงู„ู…ุฑุงู‡ู‚ ุจู†ุตู ุฏูˆุงู… ูˆูŠุญุตู„ ุนู„ู‰ ุฏุฎู„ ุฎุงุต ุจู‡ุŒ ูุฅู†ู‡ ูŠูƒุชุณุจ ุดุนูˆุฑู‹ุง ุฃูƒุจุฑ ุจุงู„ู…ุณุคูˆู„ูŠุฉ ูˆุงู„ุซู‚ุฉ ุจุงู„ู†ูุณ. ูˆู…ุน ุฐู„ูƒุŒ ู„ุง ูŠู†ุจุบูŠ ุฅุฌุจุงุฑ ุงู„ู…ุฑุงู‡ู‚ูŠู† ุนู„ู‰ ุงู„ุนู…ู„ ุจู†ุตู ุฏูˆุงู… ุฏูˆู† ู…ูˆุงูู‚ุชู‡ู… ุงู„ู…ุณุจู‚ุฉ. ูŠุฌุจ ุฃูŠุถู‹ุง ู…ุฑุงุนุงุฉ ุฃุนู…ุงุฑู‡ู… ูˆู‚ุฏุฑุงุชู‡ู… ุงู„ุจุฏู†ูŠุฉ ูˆุงู„ุนู‚ู„ูŠุฉ ู‚ุจู„ ุชูˆุธูŠูู‡ู…. 
``` **DISCLAIMER**: Text ordering may be off for Arabic due to code block not being able to render #### Chinese ``` <human>: ่ฏทๅ‘ๆˆ‘ๆŽจ่ไธ€้ƒจๅ‘จๆœซ็œ‹็š„็”ตๅฝฑๅนถ่ฏดๆ˜ŽๅŽŸๅ› ใ€‚ <bot>: โ€œๅผบ็ƒˆๅ‘ๆ‚จๆŽจ่ใ€Š็พŽไธฝไบบ็”Ÿใ€‹่ฟ™้ƒจๅฝฑ็‰‡ใ€‚่ฏฅ็‰‡็”ฑ็ฝ—ไผฏๆ‰˜ยท่ดๅฐผๅฐผ่‡ชๅฏผ่‡ชๆผ”๏ผŒ่ฎฒ่ฟฐไบ†ไบŒๆˆ˜ๆ—ถๆœŸไธ€ๅฏน็Šนๅคช็ˆถๅญ่ขซ้€่ฟ›ไบ†็บณ็ฒน้›†ไธญ่ฅ๏ผŒ็ˆถไบฒ็”จ่‡ชๅทฑ็š„ๆƒณ่ฑกๅŠ›ไธบๅ„ฟๅญ็ผ–็ป‡ไบ†ไธ€ไธช็พŽๅฅฝ็ซฅ็œŸ็š„ไธ–็•Œ๏ผŒ่ฎฉๅ„ฟๅญ็š„ๅฟƒ็ตๆฒกๆœ‰ๅ—ๅˆฐไธๆฏซไผคๅฎณ็š„ๆ•…ไบ‹ใ€‚ ่ฏฅ็‰‡่ขซ่ช‰ไธบโ€œๆœ€ไผŸๅคง็š„ๆˆ˜ไบ‰็‰‡โ€ใ€โ€œไบบ็ฑปๆœ‰ๅฒไปฅๆฅๆœ€ไฝณๅฝฑ็‰‡ไน‹ไธ€โ€๏ผŒ่ฑ†็“ฃ่ฏ„ๅˆ†้ซ˜่พพ9.6ๅˆ†ใ€‚ๅฎƒไธไป…ๆ˜ฏไธ€้ƒจๆ„Ÿไบบ็š„ไบฒๆƒ…็‰‡๏ผŒๆ›ดๆ˜ฏไธ€้ƒจๅฏŒๅซๅ“ฒ็†็š„ๅŠฑๅฟ—็‰‡ๅ’Œไบบ็”Ÿๅฏ็คบๅฝ•ใ€‚็‰‡ไธญ็ˆถไบฒ็š„ไน่ง‚่ฑ่พพใ€ๅนฝ้ป˜้ฃŽ่ถฃ็š„ไบบ็”Ÿๆ€ๅบฆ๏ผŒๅฏนไปปไฝ•ไบ‹้ƒฝๆŠฑๆœ‰็งฏๆž็ƒญๆƒ…็š„ๆ€ๅบฆ๏ผŒๅณไฝฟๅœจๆฎ‹้…ทๆ— ๆฏ”็š„้›†ไธญ่ฅ้‡ŒไนŸๆ˜ฏๅฆ‚ๆญค๏ผŒๆทฑๆทฑๅœฐๆ„ŸๆŸ“็€่ง‚ไผ—ใ€‚ๅŒๆ—ถ๏ผŒ่ฟ™ไนŸๆ˜ฏไธ€้ƒจๅ…ณไบŽ็ˆฑไธŽๅธŒๆœ›็š„็”ตๅฝฑ๏ผŒ่ฎฉๆˆ‘ไปฌๆ›ดๅŠ ็ๆƒœๅฝ“ไธ‹ๆ‰€ๆ‹ฅๆœ‰็š„ไธ€ๅˆ‡๏ผŒๅญฆไผšๅฆ‚ไฝ•ๅŽป็ˆฑ่บซ่พน็š„ๆฏไธ€ไธชไบบใ€‚ ๆ€ปไน‹๏ผŒ่ฟ™ๆ˜ฏไธ€้ƒจๅ€ผๅพ—ไธ€็œ‹ๅ†็œ‹็š„ๅฅฝ็”ตๅฝฑ๏ผŒ็›ธไฟกๆ‚จไธ€ๅฎšไผšๅ–œๆฌข็š„๏ผ ``` #### French ``` <human>: Il y a tโ€™il un petit-dรฉjeuner de 700-1000 calories sans oeuf mais qui inclut des protรฉines. <bot>: Oui, voici une recette de petit dรฉjeuner ร  base d'avoine et de fromage blanc : Ingrรฉdients : - 1 tasse (250 ml) de lait d'amande ou autre lait vรฉgรฉtal - 2 cuillรจres ร  soupe bombรฉes (30 g) de flocons d'avoine - 1/2 banane mรปre รฉcrasรฉe - 1/4 de pomme fraรฎche รฉpluchรฉe et coupรฉe en dรฉs - 1 cuillรจre ร  soupe de beurre d'amande ou autre purรฉe d'olรฉagineux - 1 cuillรจre ร  cafรฉ de cannelle moulue - 1 pincรฉe de sel Prรฉparation : 1. Dans une petite casserole, faites chauffer le lait jusqu'ร  ce qu'il soit chaud. 2. Ajoutez les flocons d'avoine, la banane, la pomme, le beurre d'amande, la cannelle et le sel dans le bol du mixeur et mรฉlangez bien pour obtenir une pรขte homogรจne. 3. Versez le mรฉlange dans le lait chaud et remuez rapidement avec une spatule en bois pour รฉviter que les flocons d'avoine ne s'agglutinent au fond de la casserole. 4. Laissez mijoter sur feu doux pendant environ 5 minutes tout en continuant de remuer rรฉguliรจrement. 5. Transfรฉrez ensuite le porridge dans un bol et servez immรฉdiatement. Vous pouvez ajouter quelques fruits rouges frais pour dรฉcorer et apporter encore plus de gourmandise ร  votre petit dรฉjeuner ! Bon appรฉtit ! ``` #### Hindi ``` <human>: เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เค•เฅˆเคธเฅ‡ เคœเคพเคฏเฅ‡? เคฌเคธ เคธเฅ‡ เคฏเคพ เค•เคพเคฐ เคธเฅ‡? เคฆเฅ‹เคจเฅ‹ เค•เฅ‡ เคฌเคพเคฐเฅ‡ เคฎเฅ‡ เคตเคฟเคธเฅเคคเคพเคฐ เคฎเฅ‡ เคฌเคคเคพเคฏเฅ‡. 
<bot>: เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เคœเคพเคจเฅ‡ เค•เคพ เคธเคฌเคธเฅ‡ เค†เคธเคพเคจ เค”เคฐ เคคเฅ‡เคœเคผ เคคเคฐเฅ€เค•เคพ เคนเฅˆ เค•เคฟ เค†เคช เค…เคชเคจเฅ€ เค–เฅเคฆ เค•เฅ€ เค•เคพเคฐ เคฒเฅ‡เค•เคฐ เคœเคพเคเค‚, เค•เฅเคฏเฅ‹เค‚เค•เคฟ เคฏเคน เค†เคชเค•เฅ‹ เค…เคชเคจเฅ‡ เคธเคฎเคฏ เคชเคฐ เคจเคฟเคฐเฅเคญเคฐ เคฐเคนเคจเฅ‡ เค•เฅ€ เคธเฅเคตเคคเค‚เคคเฅเคฐเคคเคพ เคฆเฅ‡เคคเคพ เคนเฅˆเฅค เคนเคพเคฒเคพเค‚เค•เคฟ, เคฏเคฆเคฟ เค†เคชเค•เฅ‡ เคชเคพเคธ เค…เคชเคจเฅ€ เค•เคพเคฐ เคจเคนเฅ€เค‚ เคนเฅˆ, เคคเฅ‹ เค†เคช เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เคคเค• เคเค• เคจเคฟเคœเฅ€ เคตเคพเคนเคจ เค•เคฟเคฐเคพเค เคชเคฐ เคฒเฅ‡ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค เคฏเคฆเคฟ เค†เคช เคธเคพเคฐเฅเคตเคœเคจเคฟเค• เคชเคฐเคฟเคตเคนเคจ (เคฌเคธ) เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเคจเคพ เคšเคพเคนเคคเฅ‡ เคนเฅˆเค‚, เคคเฅ‹ เคฆเคฟเคฒเฅเคฒเฅ€ เคฎเฅ‡เค‚ เค•เคˆ เคธเคฐเค•เคพเคฐเฅ€ เค”เคฐ เคจเคฟเคœเฅ€ เคฌเคธ เคธเฅ‡เคตเคพเคเค‚ เค‰เคชเคฒเคฌเฅเคง เคนเฅˆเค‚ เคœเฅ‹ เคœเคฏเคชเฅเคฐ เค•เฅ‹ เคœเฅ‹เคกเคผเคคเฅ€ เคนเฅˆเค‚เฅค เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เคœเคพเคจเฅ‡ เคตเคพเคฒเฅ€ เค•เฅเค› เคชเฅเคฐเคฎเฅเค– เคฌเคธเฅ‡เค‚ เคนเฅˆเค‚- 1. เคฐเคพเคœเคธเฅเคฅเคพเคจ เคฐเฅ‹เคกเคตเฅ‡เคœ: เคฏเฅ‡ เคฐเคพเคœเฅเคฏ เคธเคฐเค•เคพเคฐ เคฆเฅเคตเคพเคฐเคพ เคธเค‚เคšเคพเคฒเคฟเคค เคฌเคธเฅ‡เค‚ เคนเฅˆเค‚ เคœเฅ‹ เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เค•เฅ‡ เคฌเฅ€เคš เคšเคฒเคคเฅ€ เคนเฅˆเค‚เฅค เคฏเฅ‡ เคจเคฟเคฏเคฎเคฟเคค เคฐเฅ‚เคช เคธเฅ‡ เคšเคฒเคคเฅ€ เคนเฅˆเค‚ เค”เคฐ เค‡เคจเค•เฅ‡ เคฒเคฟเค เค…เค—เฅเคฐเคฟเคฎ เค†เคฐเค•เฅเคทเคฃ เค†เคตเคถเฅเคฏเค• เคจเคนเฅ€เค‚ เคนเฅˆเฅค 2. ALSA: เคฏเคน เคเค• เคจเคฟเคœเฅ€ เค•เค‚เคชเคจเฅ€ เคนเฅˆ เคœเฅ‹ เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เค•เฅ‡ เคฌเฅ€เคš เคจเคฟเคฏเคฎเคฟเคค เคฐเฅ‚เคช เคธเฅ‡ เคšเคฒเคคเฅ€ เคนเฅˆเฅค เค‡เคธเค•เฅ‡ เคŸเคฟเค•เคŸเฅ‹เค‚ เค•เฅ€ เคฌเฅเค•เคฟเค‚เค— เค‘เคจเคฒเคพเค‡เคจ เค”เคฐ เค‘เคซเคฒเคพเค‡เคจ เคฆเฅ‹เคจเฅ‹เค‚ เคฎเคพเคงเฅเคฏเคฎเฅ‹เค‚ เคธเฅ‡ เค•เฅ€ เคœเคพ เคธเค•เคคเฅ€ เคนเฅˆเฅค 3. RedBus: เคฏเคน เคเค• เค‘เคจเคฒเคพเค‡เคจ เคฌเคธ เคฌเฅเค•เคฟเค‚เค— เคตเฅ‡เคฌเคธเคพเค‡เคŸ เคนเฅˆ เคœเคนเคพเค‚ เคธเฅ‡ เค†เคช เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เค•เฅ‡ เคฌเฅ€เคš เคšเคฒเคจเฅ‡เคตเคพเคฒเฅ€ เคตเคฟเคญเคฟเคจเฅเคจ เค•เค‚เคชเคจเคฟเคฏเฅ‹เค‚ เค•เฅ€ เคฌเคธเฅ‹เค‚ เคฎเฅ‡เค‚ เคธเฅ€เคŸ เค†เคฐเค•เฅเคทเคฟเคค เค•เคฐเคพ เคธเค•เคคเฅ‡ เคนเฅˆเค‚เฅค 4. Ola and Uber: เค‡เคจ เคเคชเฅเคธ เค•เคพ เค‰เคชเคฏเฅ‹เค— เค•เคฐเค•เฅ‡ เค†เคช เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เคคเค• เค•เฅˆเคฌ เคฐเคพเค‡เคก เคฌเฅเค• เค•เคฐ เคธเค•เคคเฅ‡ เคนเฅˆเค‚, เคฒเฅ‡เค•เคฟเคจ เคงเฅเคฏเคพเคจ เคฐเค–เฅ‡เค‚ เค•เคฟ เคฏเคน เคตเคฟเค•เคฒเฅเคช เคธเคธเฅเคคเคพ เคจเคนเฅ€เค‚ เคนเฅ‹เค—เคพเฅค เคฆเคฟเคฒเฅเคฒเฅ€ เคธเฅ‡ เคœเคฏเคชเฅเคฐ เคœเคพเคจเฅ‡ เค•เคพ เคธเคฌเคธเฅ‡ เค…เคšเฅเค›เคพ เคตเคฟเค•เคฒเฅเคช เค†เคชเค•เฅ€ เคตเฅเคฏเค•เฅเคคเคฟเค—เคค เค†เคตเคถเฅเคฏเค•เคคเคพเค“เค‚, เคฌเคœเคŸ, เค”เคฐ เค‰เคชเคฒเคฌเฅเคงเคคเคพ เคชเคฐ เคจเคฟเคฐเฅเคญเคฐ เค•เคฐเคคเคพ เคนเฅˆเฅค เคฏเคฆเคฟ เค†เคช เค…เคชเคจเฅ€ เค–เฅเคฆ เค•เฅ€ เค•เคพเคฐ เคฐเค–เคคเฅ‡ เคนเฅˆเค‚, เคคเฅ‹ เค‡เคธเค•เคพ เคฒเคพเคญ เค‰เค เคพเคเค‚ เค…เคจเฅเคฏเคฅเคพ เค•เคฟเคธเฅ€ เคญเฅ€ เคธเคพเคฐเฅเคตเคœเคจเคฟเค• เคชเคฐเคฟเคตเคนเคจ เคธเคพเคงเคจ เค•เคพ เคšเคฏเคจ เค•เคฐเฅ‡เค‚ เคœเฅ‹ เค†เคชเค•เฅ€ เค†เคตเคถเฅเคฏเค•เคคเคพเค“เค‚ เค•เฅ‹ เคชเฅ‚เคฐเคพ เค•เคฐเฅ‡เฅค ``` #### Spanish ``` <human>: ยฟMe puedes dar algunas recomendaciones de juegos? Estoy buscando juegos de rol con un sistema de magia รบnico, un sistema de construcciรณn รบnico y una buena historia. Por favor limรญtate a juegos que se puedan jugar en PC directamente, o por emulaciรณn. <bot>: Claro! Aquรญ te dejo tres recomendaciones: 1. 
The Outer Worlds: es un juego de rol de ciencia ficciรณn con un sistema de magia basado en la tecnologรญa alienรญgena que permite al jugador personalizar sus habilidades para adaptarse a su estilo de juego. 2. Divinity: Original Sin 2 - Definitive Edition: este juego combina elementos de RPG con estrategia en tiempo real (RTS), lo que le da al jugador un control total sobre cรณmo construir y mejorar su personaje y el mundo que lo rodea. 3. Pillars of Eternity: esta aventura de fantasรญa oscura tiene lugar en un vasto mundo abierto donde los jugadores pueden elegir entre diferentes clases y desarrollar relaciones con otros personajes del juego. La mecรกnica de combate estรก basada en turnos y utiliza hechizos y armas cuerpo a cuerpo. Estos son solo algunos ejemplos de juegos que podrรญan interesarte segรบn tus criterios. Hay muchos mรกs tรญtulos disponibles en el mercado que podrรญan ajustarse a tu gusto, asรญ que no dudes en explorar otras opciones si estos juegos no cumplen con tus expectativas. ``` </details> --- ## Evaluation Graphs <details> <summary>Click to expand</summary> <!-- This section describes the evaluation protocols and provides the results. --> ![Human evaluation](images/Human_evaluation.png) <figure style="text-align:center;"> <figcaption><b>BLOOMChat vs Baselines Model in Human Preference Rankings</b></figcaption> </figure> <br> ![Human evaluation against GPT4](images/Human_evaluation_gpt4.png) <figure style="text-align:center;"> <figcaption><b>BLOOMChat vs GPT-4 in Human Preference Ranking</b></figcaption> </figure> <br> ![Multilingual evaluation](images/Multilingual_capabilities_comparison.png) <figure style="text-align:center;"> <figcaption><b>BLOOMChat surpasses other Bloom variants and state-of-the-art open-source chat models in translation tasks [NOTE: Evaluation of the BLOOM and BLOOMZ in WMT18 en->zh zh->en used (human, bot) ChatML tags due to an unintentional configuration. Results might be suboptimal.]</b></figcaption> </figure> <br> </details> --- ## Training Details <details> <summary>Click to expand</summary> ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> - [OIG dataset from OpenChatKit](https://huggingface.co/datasets/laion/OIG) - [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k) - [Oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> We trained BLOOMChat with [SambaNova DataScale systems](https://sambanova.ai/products/datascale/) with SambaNova's in-house Reconfigurable Dataflow Unit (RDU). We started from [BLOOM (176B)](https://huggingface.co/bigscience/bloom), an open-source multilingual LLM pretrained by the [BigScience group](https://huggingface.co/bigscience). We instruction-tune BLOOM (176B) on OpenChatKit with each data source subsampled to 100k for one epoch, followed by three epochs over the combined OpenChatKit and Dolly 2.0. 
All of the code used to prepare the datasets and the scripts to run training and inference are open-sourced and freely available at [sambanova/bloomchat](https://github.com/sambanova/bloomchat/tree/main) ### Prompting Style Used For Training ``` <human>: {input1 that the user wants from the bot} <bot>: {response1}</s> <human>: {input2 that the user wants from the bot} <bot>: {response2}</s> ``` ### Hyperparameters **Instruction-tuned Training on OIG** - Hardware: SambaNova Reconfigurable Dataflow Unit (RDU) - Optimizer: AdamW - Grad accumulation: 1 - Epochs: 1 - Global Batch size: 128 - Batch tokens: 128 * 2048 = 262,144 tokens - Learning Rate: 1e-5 - Learning Rate Scheduler: Cosine Schedule with Warmup - Warmup Steps: 0 - End Learning Ratio: 0.1 - Weight decay: 0.1 **Instruction-tuned Training on Dolly 2.0 and Oasst1** - Hardware: SambaNova Reconfigurable Dataflow Unit (RDU) - Optimizer: AdamW - Grad accumulation: 1 - Epochs: 3 - Global Batch size: 128 - Batch tokens: 128 * 2048 = 262,144 tokens - Learning Rate: 1e-5 - Learning Rate Scheduler: Cosine Schedule with Warmup - Warmup Steps: 0 - End Learning Ratio: 0.1 - Weight decay: 0.1 </details> --- ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> Like all LLMs, BLOOMChat has certain limitations: - Hallucination: BLOOMChat may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information. - Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output. - Repetition: BLOOMChat may produce repetitive phrases or sentences, leading to less engaging and informative responses. - Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited. - Toxicity: BLOOMChat may inadvertently generate responses containing inappropriate or harmful content. ## Acknowledgment We would like to extend our gratitude to [Together](https://www.together.xyz/) for their insightful technical discussions on overall project planning, data processing, model training, human evaluation experiment design, open-source endeavors, and their contributions on data processing code on OpenChatKit, OASST1, and Dolly 2.0. We are grateful to the various researchers and open-source projects that have contributed to the development of BLOOMChat. We thank [BigScience](https://bigscience.huggingface.co/) for providing the [BLOOM](https://huggingface.co/bigscience/bloom) model, which served as the base for our instruction tuning. We also thank [LAION](https://laion.ai/) for their [OIG dataset](https://huggingface.co/datasets/laion/OIG), OpenAssistant Conversations Dataset ([OASST1](https://huggingface.co/datasets/OpenAssistant/oasst1)) and also thank [Databricks](https://www.databricks.com/) for providing [Dolly 2.0](https://huggingface.co/datasets/databricks/databricks-dolly-15k), to provide the dataset that we instruction tuned on. We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [BigScience](https://bigscience.huggingface.co/) for their essential benchmarking contributions, which is very helpful in evaluating BLOOMChat's performance. 
We appreciate the inspiration from the wave of various recent open-source chat models, including [OpenAssistant-30B](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor), [LLaMA-Adapter-V2-65B](https://github.com/ZrrSkywalker/LLaMA-Adapter/tree/main/llama_adapter_v2_chat65b), [Vicuna-13b](https://huggingface.co/lmsys/vicuna-13b-delta-v0), [Koala-13b](https://huggingface.co/TheBloke/koala-13B-HF), [OASST-Pythia-12b](https://huggingface.co/OpenAssistant/oasst-sft-1-pythia-12b), [Alpaca-13b](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g), [ChatGLM-6b](https://github.com/THUDM/ChatGLM-6B), [FastChat-T5-3b](https://huggingface.co/lmsys/fastchat-t5-3b-v1.0), [Dolly-v2-12b](https://huggingface.co/databricks/dolly-v2-12b), [LLaMA-13b](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/), [StableLM-Tuned-Alpha-7b](https://huggingface.co/stabilityai/stablelm-tuned-alpha-7b), [RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1), [RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat) and so on. We look forward to witnessing the continued growth and success of open-source chat-based models.

We highly appreciate the hard work and dedication of these researchers and organizations towards the advancement of the open-source community. Their contributions were invaluable in the development of BLOOMChat, and we hope that our model can contribute to further advancements in the field.

## Cite BLOOMChat
```bibtex
@software{bloomchat,
  title = {{BLOOMChat: a New Open Multilingual Chat LLM}},
  author = {SambaNova Systems, Together Computer},
  url = {https://huggingface.co/sambanovasystems/BLOOMChat-176B-v1},
  month = {5},
  year = {2023},
  version = {1.0},
}
```
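### Minimal generation sketch

For completeness, the `<human>:`/`<bot>:` prompting style and the suggested inference parameters documented above can be combined into a single plain ๐Ÿค— Transformers call. This is a sketch only, not one of the officially supported inference paths described earlier, and generating with the full 176B checkpoint this way requires a large multi-GPU node.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/BLOOMChat-176B-v1")
model = AutoModelForCausalLM.from_pretrained(
    "sambanovasystems/BLOOMChat-176B-v1", device_map="auto", torch_dtype="auto"
)

# Prompt built with the <human>/<bot> tags used during instruction tuning.
prompt = "<human>: Create an itemized list of tasks to complete to start a clothing brand\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,        # suggested inference parameters from this card
    top_p=0.9,
    repetition_penalty=1.2,
    max_new_tokens=512,
)
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```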
GAI-LLM/Yi-Ko-6B-dpo-v5
GAI-LLM
"2024-01-08T05:06:18Z"
1,108
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-03T04:04:13Z"
---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---

**The license is `cc-by-nc-4.0`.**

# **GAI-LLM/Yi-Ko-6B-dpo-v5**

## Model Details

**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
Yi-Ko-6B-dpo-v5 is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model** [beomi/Yi-Ko-6B](https://huggingface.co/beomi/Yi-Ko-6B)

**Training Dataset**
- We combined open Korean datasets using a mixed strategy with DPO.
- We used 8 A100 80GB GPUs for training.

# **Model Benchmark**

## KO-LLM leaderboard
- See the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

# Implementation Code
```python
### GAI-LLM/Yi-Ko-6B-dpo-v5
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/Yi-Ko-6B-dpo-v5"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
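The implementation snippet above stops after loading the weights. A short, self-contained sketch of a single generation call follows; the prompt is illustrative, as the card does not document a specific prompt template, and the sampling settings are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GAI-LLM/Yi-Ko-6B-dpo-v5"
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo)

prompt = "ํ•œ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?"  # illustrative Korean prompt; no official template is documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```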
Edentns/DataVortexS-10.7B-v0.2
Edentns
"2024-02-24T14:18:32Z"
1,108
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "dataset:Edentns/Worktronics-FAQ", "base_model:hyeogi/SOLAR-10.7B-dpo-v0.1", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-04T15:35:36Z"
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: hyeogi/SOLAR-10.7B-dpo-v0.1 pipeline_tag: text-generation datasets: - beomi/KoAlpaca-v1.1a - Edentns/Worktronics-FAQ --- # **DataVortexS-10.7B-v0.2** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [hyeogi/SOLAR-10.7B-dpo-v0.1](https://huggingface.co/hyeogi/SOLAR-10.7B-dpo-v0.1) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 1ea - **transformers**: v4.36.2 ### **Dataset** - [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) - Edentns/Worktronics-FAQ - private ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ ๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค. ### Instruction: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ? ### Response: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค. ### Instruction: ์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | ------------: | -------------: | -------------: | -------------: | | kobest_boolq | 0.501449 | 0.668845 | 0.652565 | 0.655491 | | kobest_copa | 0.635474 | 0.685637 | 0.708601 | 0.725683 | | kobest_hellaswag | 0.417966 | 0.442942 | 0.428077 | 0.425199 | | kobest_sentineg | 0.681941 | 0.880517 | 0.921754 | 0.939528 | | **Average** | **0.5592075** | **0.66948525** | **0.67774925** | **0.68647525** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 43.6 | 38.74 | 50.74 | 38.98 | 44.7 | 44.86 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-v0.2") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-v0.2") messages = [ {"role": "system", "content": "๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ?"}, {"role": "assistant", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-dedup
GAI-LLM
"2024-01-31T07:57:08Z"
1,108
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-31T07:40:27Z"
---
license: cc-by-nc-4.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---

**The license is `cc-by-nc-4.0`.**

# **GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-dedup**

## Model Details

**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-dedup is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model** [GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15-dedup](https://huggingface.co/GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15-dedup)

**Training Dataset**
- We combined open Korean datasets using a mixed strategy with DPO.
- We used 8 A100 80GB GPUs for training.

# **Model Benchmark**

## KO-LLM leaderboard
- See the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

# Implementation Code
```python
### GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-dedup
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/OPEN-SOLAR-KO-10.7B-dpo-dedup"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.3
mu0gum
"2024-02-28T13:58:50Z"
1,108
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-28T13:46:39Z"
---
license: cc-by-nc-4.0
---

# AIFT-42dot-LLM-PLM-1.3B-ao-instruct-all-v1.3

Base model: 42dot/42dot_LLM-PLM-1.3B

Training data: about 63,000 examples from a self-built Open Orca-style dataset (deduplicated and distribution-adjusted)

Training method: full fine-tuning

Epochs: 3

## ko-lm-evaluation-harness (5-shot)

|kobest_boolq|kobest_copa|kobest_hellaswag|pawsx_ko|
|--|--|--|--|
|0.522079772079772|0.722|0.47|0.557|

## Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
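The card lists evaluation numbers and framework versions but no usage snippet. A minimal ๐Ÿค— Transformers sketch is shown below; the prompt layout and generation length are assumptions, since the exact instruction template used during fine-tuning is not given.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mu0gum/AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "Explain the difference between a list and a tuple in Python."  # illustrative; template not documented
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```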
ENERGY-DRINK-LOVE/DataVortexS_dpov3
ENERGY-DRINK-LOVE
"2024-03-16T08:22:35Z"
1,108
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "arxiv:2305.18290", "base_model:Edentns/DataVortexS-10.7B-dpo-v1.11", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-11T09:26:21Z"
---
license: cc-by-nc-4.0
base_model: Edentns/DataVortexS-10.7B-dpo-v1.11
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: nhn_dpo_v3_DataVortexS-10.7B-dpo-v1.11_DPO
  results: []
---

# ENERGY-DRINK-LOVE/DataVortexS_dpov3

### Our Team
* Youjin Chung
* Jingyeom Kim

## Model

### Base Model
* [Edentns/DataVortexS-10.7B-dpo-v1.11](https://huggingface.co/Edentns/DataVortexS-10.7B-dpo-v1.11)

### Hardware and Software
* Hardware: A100 * 8 for training our model
* Deepspeed library & Huggingface TRL Trainer

### Dataset
* DPO_dataset
  * In-house DPO dataset (built using AI-Hub data)
  * Translations of English datasets such as OpenOrca DPO (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model)

### Training Method
* [DPO](https://arxiv.org/abs/2305.18290) (a rough illustration of this setup is sketched at the end of this card)

## Benchmark

**[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**

**[Ko-LLM-Leaderboard](https://www.aihub.or.kr/leaderboard/view.do?currMenu=500&topMenu=102)**
* Ranked 7th as of 2024-03-16
* ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6551c0e37bbfce18781a8748/S4cpra6iTlzCdN7PP6A3o.png)

| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
|   60.18 |  56.23 |        69.15 |   52.76 |         67.87 |            54.9 |
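### DPO training sketch (unofficial)

The card states that training used DPO with the Hugging Face TRL trainer on top of DeepSpeed, but no script is published. The following is a minimal sketch of that kind of setup, assuming a preference dataset with `prompt`/`chosen`/`rejected` columns; the file names, DeepSpeed config, beta value, and other hyperparameters are placeholders, and argument names vary between TRL versions, so this is not the team's actual training code.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "Edentns/DataVortexS-10.7B-dpo-v1.11"  # base model named in this card
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference dataset with prompt / chosen / rejected columns.
train_ds = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")

args = TrainingArguments(
    output_dir="dpo-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
    deepspeed="ds_config.json",  # hypothetical DeepSpeed config path
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,            # TRL creates a frozen reference copy when None
    args=args,
    beta=0.1,                  # typical DPO temperature; not stated in the card
    train_dataset=train_ds,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```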
kreimben/CodeMind-gemma-2b
kreimben
"2024-05-22T06:24:57Z"
1,108
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "code", "conversational", "en", "dataset:kreimben/leetcode_with_youtube_captions", "dataset:kreimben/leetcode_user_submissions", "base_model:google/gemma-1.1-2b-it", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-17T13:08:57Z"
--- language: - en license: mit library_name: transformers tags: - code base_model: - google/gemma-1.1-2b-it datasets: - kreimben/leetcode_with_youtube_captions - kreimben/leetcode_user_submissions widget: - text: explain about two sum problem. from brute force approach to the most advanced algorithms. example_title: two sum example - text: explain about leetcode 72 edit distance. i don't get even the approach. example_title: edit distance example - text: explain about leetcode 139 Word Break. please give me the approach. example_title: word break example inference: parameters: max_new_tokens: 250 temperature: 0.3 pipeline_tag: text-generation --- # CodeMind ## ์†Œ๊ฐœ ์ฝ”๋”ฉ ํ…Œ์ŠคํŠธ ๋ฌธ์ œ ํ•ด๊ฒฐ ๋ฐ ํ•™์Šต ๋ณด์กฐ๋ฅผ ์ง€์›ํ•ด ์ฃผ๋Š” ์–ธ์–ด ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. Leetcode ํ•ด์„ค ์˜์ƒ ์ž๋ง‰ ๋ฐ ์œ ์ €๋“ค์˜ ํฌ์ŠคํŒ… ๊ธ€์„ ์ด์šฉํ•ด ํŒŒ์ธํŠœ๋‹ํ•˜์—ฌ ์ฝ”๋”ฉ ํ…Œ์ŠคํŠธ์— ์กฐ๊ธˆ ๋” ํŠนํ™”๋œ ๋‹ต์•ˆ์„ ์ œ์‹œํ•ด ์ค„ ์ˆ˜ ์žˆ๊ฒŒ ํ•˜์˜€์Šต๋‹ˆ๋‹ค. ## ๋ชจ๋ธ ์„ธ๋ถ€ ์ •๋ณด - **๋ชจ๋ธ ์ด๋ฆ„**: CodeMind - **๊ธฐ๋ณธ ๋ชจ๋ธ**: google/gemma-1.1-2b-it - **ํ›ˆ๋ จ ์–ธ์–ด**: ์˜์–ด - **๋ชจ๋ธ ํฌ๊ธฐ**: 2.51B ํŒŒ๋ผ๋ฏธํ„ฐ ## ํŒ€์› ๊ตฌ์„ฑ - NLP 3๋ช… - SRE 2๋ช… ## ์ฃผ์š” ๊ธฐ๋Šฅ - ๋ฌธ์ œ ์œ ํ˜• ๋ฐ ์ ‘๊ทผ๋ฒ• ์„ค๋ช… - ์ •๋‹ต ์ฝ”๋“œ ์ƒ์„ฑ ## ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ - [**LeetCode ์‚ฌ์šฉ์ž ์ œ์ถœ๋ฌผ**](https://huggingface.co/datasets/kreimben/leetcode_user_submissions): ๋‹ค์–‘ํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜ ๋ฌธ์ œ์˜ ํŒŒ์ด์ฌ ์†”๋ฃจ์…˜ - [**์œ ํŠœ๋ธŒ ์บก์…˜**](https://huggingface.co/datasets/kreimben/leetcode_with_youtube_captions): LeetCode ๋ฌธ์ œ์— ๋Œ€ํ•œ ์„ค๋ช… ๋ฐ ๋‹จ๊ณ„๋ณ„ ๊ฐ€์ด๋“œ ## ์‚ฌ์šฉ๋œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ - [transformers](https://github.com/huggingface/transformers): ์ž์—ฐ์–ด ์ฒ˜๋ฆฌ ๋ชจ๋ธ์„ ์œ„ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ - [datasets](https://github.com/huggingface/datasets): ๋ฐ์ดํ„ฐ์…‹ ์ฒ˜๋ฆฌ ๋ฐ ๊ด€๋ฆฌ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ - [bitsandbytes](https://github.com/TimDettmers/bitsandbytes): ์ตœ์ ํ™”๋œ ์—ฐ์‚ฐ์„ ์œ„ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ - [peft](https://github.com/huggingface/peft): ํŒŒ์ธ ํŠœ๋‹์„ ์œ„ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ - [trl](https://github.com/huggingface/trl): ์–ธ์–ด ๋ชจ๋ธ ํŠœ๋‹์„ ์œ„ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ - [pandas](https://github.com/pandas-dev/pandas): ๋ฐ์ดํ„ฐ ์กฐ์ž‘์„ ์œ„ํ•œ ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ ## ํŒŒ์ผ ๊ตฌ์กฐ - **dataset/**: ๋ฐ์ดํ„ฐ์…‹ ํŒŒ์ผ์„ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. - **eval/**: ํ‰๊ฐ€ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. - **fine-tuning/**: fine tuning ๊ด€๋ จ ๋…ธํŠธ๋ถ ๋ฐ ์Šคํฌ๋ฆฝํŠธ๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค. - `gemma-1.1-2b-it peft qlora.ipynb`: fine tuning ๊ณผ์ •์— ๋Œ€ํ•œ ์„ธ๋ถ€ ์‚ฌํ•ญ์ด ํฌํ•จ๋œ ๋…ธํŠธ๋ถ์ž…๋‹ˆ๋‹ค. - **demo.ipynb**: ๋ฐ๋ชจ ๋…ธํŠธ๋ถ์œผ๋กœ ๋ชจ๋ธ ์‚ฌ์šฉ ์˜ˆ์ œ๊ฐ€ ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - **requirements.txt**: ํ”„๋กœ์ ํŠธ ์˜์กด์„ฑ ๋ชฉ๋ก์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. - **utils.py**: ์œ ํ‹ธ๋ฆฌํ‹ฐ ํ•จ์ˆ˜๋“ค์ด ํฌํ•จ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ## ์‚ฌ์šฉ ๋ฐฉ๋ฒ• ์ด ๋ชจ๋ธ์€ HuggingFace์˜ ๋ชจ๋ธ ํ—ˆ๋ธŒ๋ฅผ ํ†ตํ•ด ์•ก์„ธ์Šคํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, API๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์‘์šฉ ํ”„๋กœ๊ทธ๋žจ์— ํ†ตํ•ฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ฝ”๋”ฉ ๋ฌธ์ œ ๋˜๋Š” ํ”„๋กœ๊ทธ๋ž˜๋ฐ ๊ด€๋ จ ์งˆ๋ฌธ์„ ์ œ๊ณตํ•˜๋ฉด ๋ชจ๋ธ์ด ๊ด€๋ จ ์„ค๋ช…, ์ฝ”๋“œ ์Šค๋‹ˆํŽซ ๋˜๋Š” ๊ฐ€์ด๋“œ๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค. 
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kreimben/CodeMind-gemma-2b")
model = AutoModelForCausalLM.from_pretrained("kreimben/CodeMind-gemma-2b")

inputs = tokenizer("์ฝ”๋”ฉ ๋ฌธ์ œ๋‚˜ ์งˆ๋ฌธ์„ ์—ฌ๊ธฐ์— ์ž…๋ ฅํ•˜์„ธ์š”", return_tensors="pt")
outputs = model.generate(inputs.input_ids)
print(tokenizer.decode(outputs[0]))
```

## ํ›ˆ๋ จ ๊ณผ์ •

### ๋ชจ๋ธ ๋ฐ ํ† ํฌ๋‚˜์ด์ € ๋กœ๋“œ

```python
import os

import torch  # needed for torch.bfloat16 below
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

model_id = 'google/gemma-1.1-2b-it'
token = os.getenv('HF_READ')

model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"": 0}, token=token)
model.config.use_cache = False
model.gradient_checkpointing_enable()

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.padding_side = 'right'
tokenizer.pad_token = tokenizer.eos_token
```

### LoRA ๊ตฌ์„ฑ ๋ฐ ๋ชจ๋ธ ์ค€๋น„

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
import bitsandbytes as bnb

model = prepare_model_for_kbit_training(model)

def find_all_linear_names(model):
    # Collect the names of every 4-bit linear layer so LoRA can target all of them.
    cls = bnb.nn.Linear4bit
    lora_module_names = set()
    for name, module in model.named_modules():
        if isinstance(module, cls):
            names = name.split('.')
            lora_module_names.add(names[0] if len(names) == 1 else names[-1])
        if 'lm_head' in lora_module_names:
            lora_module_names.remove('lm_head')
    return list(lora_module_names)

modules = find_all_linear_names(model)

lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    target_modules=modules,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, lora_config)
```

### ๋ฐ์ดํ„ฐ ์ค€๋น„

```python
import datasets  # needed for datasets.load_dataset below
import pandas as pd
from datasets import Dataset

submission_dataset = datasets.load_dataset('kreimben/leetcode_user_submissions_only_python', split='train').to_pandas()
submission_dataset = submission_dataset[['title', 'question_hints', 'question_content', 'content']]

captions_dataset = datasets.load_dataset('kreimben/leetcode_with_youtube_captions', split='train').to_pandas()
captions_dataset = captions_dataset[['title', 'question_hints', 'question_content', 'cc_content']]
captions_dataset.rename(columns={'cc_content': 'content'}, inplace=True)

dataset = pd.concat([submission_dataset, captions_dataset])
del submission_dataset, captions_dataset

dataset = Dataset.from_pandas(dataset)

GEMMA_2B_IT_MODEL_PREFIX_TEXT = "Below is an coding test problem. Solve the question."

def generate_prompt(data_point):
    return f"<bos><start_of_turn>user {GEMMA_2B_IT_MODEL_PREFIX_TEXT} I don't know {data_point['title']} problem. give me the insight or appoach. this is problem's hint. {data_point['question_hints']} here are some content of question. 
{data_point['question_content']}<end_of_turn> <start_of_turn>model {data_point['content']}<end_of_turn><eos>" text_column = [generate_prompt(data_point) for data_point in dataset] dataset = dataset.add_column("prompt", text_column) ``` ### ํ›ˆ๋ จ ```python from trl import SFTTrainer import transformers import torch tokenizer.pad_token = tokenizer.eos_token torch.cuda.empty_cache() trainer = SFTTrainer( model=model, train_dataset=dataset, dataset_text_field="prompt", peft_config=lora_config, data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False), args=transformers.TrainingArguments( output_dir='out', bf16=True, max_steps=100, warmup_steps=50, per_device_train_batch_size=1, gradient_accumulation_steps=1, optim="paged_adamw_8bit", logging_steps=20, report_to='wandb', ), ) trainer.train() ``` ## ํ‰๊ฐ€ ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์€ ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํ‰๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค: | Metric | Value | |--------------|--------| | Average | 41.62 | | ARC | 41.81 | | HellaSwag | 59.03 | | MMLU | 37.26 | | TruthfulQA | 43.45 | | Winogrande | 59.91 | | GSM8K | 8.26 | ## ์ œํ•œ ์‚ฌํ•ญ ๋ฐ ์œค๋ฆฌ์  ๊ณ ๋ ค์‚ฌํ•ญ - ๋ชจ๋ธ์˜ ์ถœ๋ ฅ์€ ํ•™์Šต ๋ฐ์ดํ„ฐ์— ๊ธฐ๋ฐ˜ํ•˜๋ฏ€๋กœ ํ•ญ์ƒ ์ •ํ™•ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. - ์ค‘์š”ํ•œ ๊ฒฐ์ •์ด๋‚˜ ์‹ค์„ธ๊ณ„ ๋ฌธ์ œ ํ•ด๊ฒฐ์— ๋ชจ๋ธ ์ถœ๋ ฅ์„ ์‚ฌ์šฉํ•˜๊ธฐ ์ „์— ๋ฐ˜๋“œ์‹œ ๊ฒ€์ฆ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.
hwkwon/S-SOLAR-10.7B-SFT-v1.3
hwkwon
"2024-03-18T04:04:40Z"
1,108
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-18T02:43:52Z"
---
license: cc-by-nc-4.0
language:
- ko
---

# S-SOLAR-10.7B

<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0), trained with DeepSpeed.

### Training Data

TBA

### Prompt Template
```
### User:
User query input

### Assistant:
```

### License
This model is licensed under cc-by-nc-4.0, which allows others to share and adapt the model for non-commercial purposes.
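A short sketch of how the prompt template above could be used with ๐Ÿค— Transformers follows; the example question and generation settings are illustrative and are not part of the card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hwkwon/S-SOLAR-10.7B-SFT-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Fill the documented "### User: / ### Assistant:" template.
prompt = "### User: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```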
gwonny/nox-solar-10.7b-v4-kolon-all-10
gwonny
"2024-03-27T04:52:06Z"
1,108
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "ko", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-21T06:15:04Z"
---
license: cc-by-nc-4.0
language:
- ko
- en
---

## Base Model:
We built this LLM from davidkim205/nox-solar-10.7b-v4.

## Model Description
We used QLoRA (64, 16) and NEFTune for LLM fine-tuning, with a learning rate of 2e-4 (a rough illustration of this setup is sketched at the end of this card).

## Train Detail
Our private data was used, and the model was trained for 10 epochs.

## Others
We are building LLM models for Kolon!
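### Training setup sketch (unofficial)

The description above names QLoRA (64, 16) with NEFTune at a 2e-4 learning rate but gives no code. Below is a rough sketch of that kind of configuration with PEFT and TRL. Reading "(64, 16)" as LoRA rank and alpha is an assumption, the NEFTune noise alpha is not stated in the card, the dataset path is a placeholder because the training data is private, and TRL argument names vary between versions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

base = "davidkim205/nox-solar-10.7b-v4"  # base model named in this card
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder for the private Kolon dataset (not released); assumes a "text" column.
train_ds = load_dataset("json", data_files="kolon_private.jsonl", split="train")

lora = LoraConfig(r=64, lora_alpha=16,  # reading "(64, 16)" as rank/alpha is an assumption
                  lora_dropout=0.05, bias="none", task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    dataset_text_field="text",
    peft_config=lora,
    neftune_noise_alpha=5,  # enables NEFTune; the alpha value is not stated in the card
    args=TrainingArguments(output_dir="out", learning_rate=2e-4, num_train_epochs=10,
                           per_device_train_batch_size=1, bf16=True),
)
trainer.train()
```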
juengsi/DT-EQ-SOLAR-10.7B-v0.1
juengsi
"2024-03-31T16:29:33Z"
1,108
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-31T15:44:06Z"
---
library_name: transformers
language:
- ko
license: cc-by-4.0
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

- Training-free model created by editing parameters (a rough illustration of a parameter-space merge is sketched at the end of this card)

### Model Description

- Method: parameter-space merging

## Model Card Contact

[More Information Needed]
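The card only notes that the model was produced without training, by merging in parameter space, and does not name the donor checkpoints or the recipe. As a bare-bones illustration of what a simple parameter-space merge looks like, here is a weighted average of two same-architecture state dicts; the donor model names and the 50/50 weighting are purely hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical donor checkpoints; the real donors are not disclosed in the card.
model_a = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0", torch_dtype=torch.float32)
model_b = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0", torch_dtype=torch.float32)

alpha = 0.5  # hypothetical interpolation weight
sd_b = model_b.state_dict()

merged_state = {}
for name, tensor_a in model_a.state_dict().items():
    merged_state[name] = alpha * tensor_a + (1.0 - alpha) * sd_b[name]

model_a.load_state_dict(merged_state)   # reuse one donor as the container for the merged weights
model_a.save_pretrained("merged-solar-10.7b")
```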
juengsi/DT-SL-SOLAR-10.7B-v0.1
juengsi
"2024-03-31T17:25:10Z"
1,108
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "arxiv:1910.09700", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-31T16:11:10Z"
--- library_name: transformers license: cc-by-4.0 language: - ko pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
etri-xainlp/kor-llama2-13b-dpo
etri-xainlp
"2024-04-24T01:59:00Z"
1,108
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-24T01:44:53Z"
--- license: cc-by-nc-4.0 --- # etri-xainlp/kor-llama2-13b-dpo ## Model Details **Model Developers** ETRI xainlp team **Input** text only. **Output** text only. **Model Architecture** **Base Model** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) **Training Dataset** - sft+lora: 1,821k instruction-following examples - dpo+lora: 221k user-preference examples - We used 8 x A100 80GB GPUs for training.
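The card above does not include a usage snippet; the following is a minimal inference sketch, not taken from the original card, assuming the model loads like any other Llama-2-based causal LM under the published repo id.

```python
# Minimal inference sketch (assumption: standard Llama-2-style causal LM loading).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "etri-xainlp/kor-llama2-13b-dpo"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The prompt format is not documented in the card, so the plain-text prompt above is only an assumption.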
NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
NeverSleep
"2024-05-21T20:37:21Z"
1,108
39
transformers
[ "transformers", "safetensors", "llama", "text-generation", "not-for-all-audiences", "nsfw", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-05T15:25:50Z"
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences - nsfw --- ## Lumimaid 0.1 <center><div style="width: 100%;"> <img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;"> </div></center> This model uses the Llama3 **prompting format**. It is Llama3 trained on our RP datasets; we tried to strike a balance between the ERP and the RP, not too horny, but just enough. We also added some non-RP data, making the model less dumb overall. It should look like a 40%/60% ratio for Non-RP/RP+ERP data. This model includes the new Luminae dataset from Ikari. This model has received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request. If you consider trying this model, please give us some feedback either on the Community tab on HF or on our [Discord Server](https://discord.gg/MtCVRWTZXY). ## Credits: - Undi - IkariDev ## Description This repo contains FP16 files of Lumimaid-8B-v0.1-OAS. Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS) ## Training data used: - [Aesir datasets](https://huggingface.co/MinervaAI) - [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) - [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx - [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt) - [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal) - Luminae-i1 (70B/70B-alt) (i2 did not exist when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset - [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly) - [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly) - [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly) - Airoboros (reduced) - [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced) ## Models used (only for 8B) - Initial LumiMaid 8B Finetune - Undi95/Llama-3-Unholy-8B-e4 - Undi95/Llama-3-LewdPlay-8B ## Prompt template: Llama3 ``` <|begin_of_text|><|start_header_id|>system<|end_header_id|> {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|> {input}<|eot_id|><|start_header_id|>assistant<|end_header_id|> {output}<|eot_id|> ``` ## Others Undi: If you want to support us, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
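The card documents the Llama3 prompt template but gives no loading code. Below is an illustrative sketch, not from the original card, that builds that format through the tokenizer's chat template, assuming the repository ships a Llama-3 chat template.

```python
# Illustrative sketch; assumes the tokenizer provides the Llama-3 chat template shown above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Introduce your character in one short paragraph."},
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The sampling settings are placeholders; the authors have not published recommended settings for this release.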
Alphacode-AI/AlphaMist7B-slr-v4-slow
Alphacode-AI
"2024-05-10T06:25:25Z"
1,108
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-10T02:16:33Z"
--- license: cc-by-nc-4.0 language: - ko --- This model is a version of mistralai/Mistral-7B-v0.1 that has been fine-tuned with our in-house custom data. Train spec: we used a single node with 4x A100 GPUs, training with DeepSpeed, the HuggingFace TRL Trainer, and HuggingFace Accelerate.
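No usage instructions are given in the card; a quick-start sketch with the transformers pipeline is shown below. It is not part of the original card and simply assumes standard Mistral-style causal-LM loading.

```python
# Quick-start sketch (assumption: standard causal-LM loading via the text-generation pipeline).
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Alphacode-AI/AlphaMist7B-slr-v4-slow",
    torch_dtype=torch.float16,
    device_map="auto",
)
result = generator("ํ•œ๊ตญ์˜ ์‚ฌ๊ณ„์ ˆ์„ ํ•œ ๋ฌธ์žฅ์œผ๋กœ ์„ค๋ช…ํ•ด ์ฃผ์„ธ์š”.", max_new_tokens=128)
print(result[0]["generated_text"])
```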
trl-internal-testing/tiny-random-idefics2
trl-internal-testing
"2024-06-18T17:05:42Z"
1,108
0
transformers
[ "transformers", "safetensors", "idefics2", "pretraining", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2024-06-18T16:20:20Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ```python from transformers import Idefics2Config, AutoProcessor, MistralConfig, Idefics2ForConditionalGeneration from transformers.models.idefics2.configuration_idefics2 import Idefics2VisionConfig config = Idefics2Config( text_config=MistralConfig( max_position_embeddings=4096 * 8, vocab_size=32003, hidden_size=4 * 8, num_attention_heads=8, intermediate_size=16, num_hidden_layers=2, ), vision_config=Idefics2VisionConfig( hidden_size=8 * 4, num_attention_heads=4, num_hidden_layers=2, intermediate_size=16, ), ) model = Idefics2ForConditionalGeneration(config=config) processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b") model.push_to_hub("trl-internal-testing/tiny-random-idefics2") processor.push_to_hub("trl-internal-testing/tiny-random-idefics2") ``` ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. 
--> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Felladrin/gguf-Qwen2-0.5B-Instruct-llamafy
Felladrin
"2024-06-27T12:07:07Z"
1,108
0
null
[ "gguf", "base_model:Minami-su/Qwen2-0.5B-Instruct-llamafy", "license:apache-2.0", "region:us" ]
null
"2024-06-27T11:42:24Z"
--- license: apache-2.0 base_model: Minami-su/Qwen2-0.5B-Instruct-llamafy --- GGUF version of [Minami-su/Qwen2-0.5B-Instruct-llamafy](https://huggingface.co/Minami-su/Qwen2-0.5B-Instruct-llamafy).
Q-bert/Zeus-13B-v1-arithmetic
Q-bert
"2023-10-16T18:55:05Z"
1,107
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-14T08:21:02Z"
--- license: llama2 ---
NeverSleep/Noromaid-20b-v0.1.1
NeverSleep
"2023-11-21T17:23:58Z"
1,107
39
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-16T02:53:53Z"
--- license: cc-by-nc-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png) --- # Disclaimer: ## This is a ***TEST*** version, don't expect everything to work!!! You may use our custom **prompting format** (scroll down to download the config files!), or simple Alpaca. **(Choose whichever fits you best!)** --- # This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)! Tired of the same merges every time? Here it is, the Noromaid-20b-v0.1.1 model. Suitable for RP, ERP and general stuff. [Recommended settings - No settings yet (please suggest some over in the Community tab!)] <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains fp16 files of Noromaid-20b-v0.1.1. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-20b-v0.1.1-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Custom format: UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). OLD SillyTavern config files: [Context](https://files.catbox.moe/x85uy1.json), [Instruct](https://files.catbox.moe/ttw1l9.json). ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## Training data used: - [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) lets the model behave more like a human and enhances the output. - [Aesir Private RP dataset] New data from a never-before-used dataset; it adds fresh data, no LimaRP spam, this is 100% new. Thanks to the [MinervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it! ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
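As a concrete illustration of the Alpaca template above, here is a minimal generation sketch. It is not from the original card; the sampling values are placeholders since no recommended settings have been published.

```python
# Sketch: fill the documented Alpaca template and generate (sampling values are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NeverSleep/Noromaid-20b-v0.1.1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)
inputs = tokenizer(alpaca_prompt.format(prompt="Describe a rainy harbor town in two sentences."), return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```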
HwiyeolJo/testtt
HwiyeolJo
"2024-01-03T16:43:55Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-03T16:30:03Z"
Entry not found
hyeogi/SOLAR-10.7B-dpo-v1
hyeogi
"2024-01-10T00:35:05Z"
1,107
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "SOLAR-10.7B", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-09T23:58:13Z"
--- language: - ko pipeline_tag: text-generation tags: - SOLAR-10.7B license: apache-2.0 --- # SOLAR-10.7B ### Model Details - Base Model: [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) ### Datasets - sampled and translated [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) - sampled and translated [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) ### Benchmark
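The card lists preference data (a translated sample of Anthropic/hh-rlhf) but no training code. The sketch below only illustrates the kind of DPO fine-tuning implied by the model name, using an older trl-style DPOTrainer API; the dataset file, column layout, beta value, and training arguments are all assumptions, not details from the card.

```python
# Hedged DPO sketch (older trl-style API); dataset path, columns and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "beomi/OPEN-SOLAR-KO-10.7B"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Expected columns: "prompt", "chosen", "rejected" (placeholder file name).
prefs = load_dataset("json", data_files="ko_preferences.jsonl", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,          # trl builds a frozen reference copy when None
    beta=0.1,                # assumed value
    train_dataset=prefs,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="dpo-out", per_device_train_batch_size=1,
                           num_train_epochs=1, remove_unused_columns=False),
)
trainer.train()
```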
GAI-LLM/Yi-Ko-6B-mixed-v15
GAI-LLM
"2024-01-18T12:26:29Z"
1,107
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-18T02:12:21Z"
--- license: cc-by-nc-4.0 pipeline_tag: text-generation --- **The license is `cc-by-nc-4.0`.** # **GAI-LLM/Yi-Ko-6B-mixed-v15** ## Model Details **Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** GAI-LLM/Yi-Ko-6B-mixed-v15 is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** [beomi/Yi-Ko-6B] **Training Dataset** - We combined open Korean datasets using a mixed strategy. - We used 8 x A100 80GB GPUs for training. # **Model Benchmark** ## KO-LLM leaderboard - See the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). # Implementation Code ```python ### GAI-LLM/Yi-Ko-6B-mixed-v15 from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "GAI-LLM/Yi-Ko-6B-mixed-v15" model = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ```
cocoirun/AIFT-42dot-PLM-1.3B-ao-instruct-all-v0.4-ff-e1
cocoirun
"2024-01-19T01:36:35Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:42dot/42dot_LLM-PLM-1.3B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-19T01:29:44Z"
--- license: cc-by-nc-4.0 base_model: 42dot/42dot_LLM-PLM-1.3B tags: - generated_from_trainer model-index: - name: AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.4-merge-ff-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # AIFT-42dot_LLM-PLM-1.3B-ao-instruct-all-v0.4-merge-ff-e1 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
cocoirun/AIFT-ko-orca-plat-Yi-ko-6b-v1.0
cocoirun
"2024-01-19T12:18:02Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-19T11:52:40Z"
--- license: cc-by-sa-4.0 --- <h1>orca-platypus - instruct model v1.0</h1> <b><Training data construction></b> We used the KOR-OpenOrca-Platypus data released by kyujinpy after partially removing (sampling) and cleaning it. We then reviewed the data, extracted the relevant tasks, and built our own training data for those tasks from open-source NLP data: history, science, math, machine reading comprehension and review-analysis problems were constructed with GPT, and additional training data (morphology, machine reading comprehension and summarization) was built from AIHub common-sense and machine-reading-comprehension data. History and general-knowledge quizzes from various blogs were converted into training-data format by hand. Following the AI2AI Challenge data format, about 500 elementary-level science and math problems were created with GPT. English translation data (English-to-Korean / Korean-to-English) was also used for training. In total, about 40k samples were used. <br> <br> + TruthfulQA-style questions were added (true/false questions about folk beliefs). + Machine-reading-comprehension training data was built by obtaining answers through ChatGPT. + Grammar-related training data. <br> ### The training data files are not publicly released. <br> <b><Training></b> Training was done with LoRA on 2 x A100 40G GPUs.
heavytail/kullm-mistral
heavytail
"2024-01-28T11:40:06Z"
1,107
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-28T09:03:43Z"
--- license: apache-2.0 language: - ko --- # KULLM project - base model: mistralai/Mistral-7B-Instruct-v0.2 ## datasets - KULLM dataset - hand-crafted instruction data ## Implementation Code ```python from transformers import ( AutoModelForCausalLM, AutoTokenizer ) import torch repo = "heavytail/kullm-mistral" model = AutoModelForCausalLM.from_pretrained( repo, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ``` Initial upload: 2024/01/28 20:30
Edentns/DataVortexS-10.7B-dpo-v1.9
Edentns
"2024-02-24T18:56:59Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:beomi/OPEN-SOLAR-KO-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-30T00:07:45Z"
--- tags: - text-generation license: cc-by-nc-4.0 language: - ko base_model: beomi/OPEN-SOLAR-KO-10.7B pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.9** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **Alpaca (Chat)** format. E.g. ```python text = """\ ### System: ๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค. ### User: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ? ### Assistant: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค. ### User: ์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----------: | -----------: | -----------: | -----------: | | kobest_boolq | 0.902418 | 0.904502 | 0.91804 | 0.915893 | | kobest_copa | 0.815462 | 0.853789 | 0.855721 | 0.866903 | | kobest_hellaswag | 0.49901 | 0.488796 | 0.484538 | 0.498009 | | kobest_sentineg | 0.335008 | 0.977325 | 0.979839 | 0.982364 | | **Average** | **0.637974** | **0.806103** | **0.809534** | **0.815792** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 55.19 | 53.33 | 62.57 | 49.55 | 49.01 | 61.51 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.9") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.9") messages = [ {"role": "system", "content": "๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ?"}, {"role": "assistant", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/). which allows others to share and adapt the model for non-commercial purposes. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
knlp/KS-SOLAR-10.7B-v0.1
knlp
"2024-02-14T08:12:16Z"
1,107
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:kyujinpy/Open-platypus-Commercial", "license:cc-by-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-05T05:16:59Z"
--- license: cc-by-4.0 datasets: - kyujinpy/Open-platypus-Commercial language: - en --- **Model Details** Model Developers: *SeungJin Lee (knlpscience)* Base Model: *upstage/SOLAR-10.7B-v1.0* **Notice** ***hyper params I*** - batch_size: 16 - num_epochs: 1 - micro_batch: 1 - gradient_accumulation_steps: batch_size // micro_batch ***hyper params II*** - cutoff_len: 4096 - lr_scheduler: 'cosine' - warmup_ratio: 0.06 - learning_rate: 4e-4 - optimizer: 'adamw_torch' - weight_decay: 0.01 - max_grad_norm: 1.0 ***LoRA config*** - lora_r: 64 - lora_alpha: 16 - lora_dropout: 0.05 - lora_target_modules: ["gate_proj", "down_proj", "up_proj"]
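The LoRA settings above are listed as plain text; expressed as a peft LoraConfig they would look roughly like the sketch below. The task type and the way the adapter was wired into the trainer are assumptions, since the card does not show code.

```python
# The listed LoRA hyperparameters as a peft LoraConfig (task type is an assumption).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
base = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```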
artificialguybr/doodle-redmond-doodle-hand-drawing-style-lora-for-sd-xl
artificialguybr
"2024-02-12T22:45:06Z"
1,107
12
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "drawing", "style", "doodle", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
"2024-02-12T22:44:59Z"
--- license: other license_name: bespoke-lora-trained-license license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora - drawing - style - doodle base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: doodle widget: - text: 'A drawing of husky dog, , doodle, DoodleRedm , ' output: url: >- 6569345.jpeg - text: 'A drawing of Pennywise, , doodle, DoodleRedm , ' output: url: >- 6569335.jpeg - text: 'A drawing of Shrek, , doodle, DoodleRedm , ' output: url: >- 6569336.jpeg - text: 'A drawing of cat wearing sunglasses, , doodle, DoodleRedm , ' output: url: >- 6569337.jpeg - text: 'A drawing of A ghost, , doodle, DoodleRedm , ' output: url: >- 6569334.jpeg - text: 'A drawing of A owl, , doodle, DoodleRedm , ' output: url: >- 6569340.jpeg - text: 'A drawing of A angry police officer, , doodle, DoodleRedm , ' output: url: >- 6569342.jpeg - text: 'A drawing of Starbucks coffe cup, , doodle, DoodleRedm , ' output: url: >- 6569341.jpeg - text: 'A drawing of alien spaceship, , doodle, DoodleRedm , ' output: url: >- 6569343.jpeg - text: 'A drawing of alien, caricature, , doodle, DoodleRedm , ' output: url: >- 6569346.jpeg --- # Doodle.Redmond - Doodle Hand drawing Style Lora for SD XL <Gallery /> ## Model description <h1 id="heading-28">Doodle.Redmond is here!</h1><p>I'm grateful for the GPU time from <strong>Redmond.AI</strong> that allowed me to finish this LORA!</p><p>Want to test and have acess to all my AI Stuff? Check my <a target="_blank" rel="ugc" href="https://artificialguy.com/">website</a>!</p><p>This is a <strong>Doodle </strong>LORA fine-tuned on <strong>SD XL 1.0.</strong></p><p>Test all my Loras <a target="_blank" rel="ugc" href="https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora">here</a> for free and unlimited. Thanks, HF, for Inference API!</p><p>The LORA has a high capacity to generate Doodle Style in a wide variety of themes.<strong> It's a versatile LORA.</strong></p><p><strong><u>The tag for the model: Doodle, DoodleRedm</u></strong></p><p>I really hope you like the LORA and use it.</p><p>If you like the model and think it's worth it, you can make a donation to my <a target="_blank" rel="ugc" href="https://www.patreon.com/user?u=81570187">Patreon</a> or <a target="_blank" rel="ugc" href="https://ko-fi.com/jvkape">Ko-fi</a>.</p><p>Follow me in my twitter to know before all about new models:</p><p><a target="_blank" rel="ugc" href="https://twitter.com/artificialguybr/"><u>https://twitter.com/artificialguybr/</u></a></p> ## Trigger words You should use `doodle`, `doodleredm` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/artificialguybr/doodle-redmond-doodle-hand-drawing-style-lora-for-sd-xl/tree/main) them in the Files & versions tab. 
## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('artificialguybr/doodle-redmond-doodle-hand-drawing-style-lora-for-sd-xl', weight_name='DoodleRedmond-Doodle-DoodleRedm.safetensors') image = pipeline('A drawing of alien, caricature, , doodle, DoodleRedm , ').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT
ENERGY-DRINK-LOVE
"2024-03-04T12:33:42Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:LDCC/LDCC-SOLAR-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-04T09:01:15Z"
--- license: cc-by-nc-4.0 base_model: LDCC/LDCC-SOLAR-10.7B tags: - trl - sft - generated_from_trainer datasets: - generator model-index: - name: leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on the generator dataset. * dataset: leaderboard dataset 1.3 + Open-Hermes dataset (ko-mt) ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu118 - Datasets 2.17.1 - Tokenizers 0.15.2
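The tags indicate the model was trained with trl's SFT trainer; a hedged sketch of that setup is shown below. The dataset file, text field name, and trainer arguments are illustrative assumptions, and the call uses the trl 0.7.x-style API in which SFTTrainer accepts dataset_text_field and max_seq_length directly.

```python
# Hedged SFT sketch (trl 0.7.x-style API); dataset path, field name and arguments are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("json", data_files="train.jsonl", split="train")  # placeholder data file

trainer = SFTTrainer(
    model="LDCC/LDCC-SOLAR-10.7B",       # base model named in the card
    train_dataset=dataset,
    dataset_text_field="text",           # assumed field name
    max_seq_length=2048,
    args=TrainingArguments(output_dir="sft-out", per_device_train_batch_size=1, num_train_epochs=1),
)
trainer.train()
```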
hwkwon/S-SOLAR-10.7B-v1.1
hwkwon
"2024-03-06T08:50:16Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-06T08:34:50Z"
--- license: cc-by-nc-4.0 language: - ko --- # S-SOLAR-10.7B <!-- Provide a quick summary of what the model is/does. --> <!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).--> ### Model Description <!-- Provide a longer summary of what this model is. --> This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0). ### Training Data TBA ### Prompt Template ``` ### User: User query input ### Assistant: ``` ### License This model is licensed under cc-by-nc-4.0, which allows others to share and adapt the model for non-commercial purposes.
Saxo/yunsung-llama-2-koen-13b-linkbricks-sft-basic-v1
Saxo
"2024-04-23T06:31:04Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "en", "dataset:Saxo/total_ko_train_set_small_basic", "dataset:beomi/KoAlpaca-v1.1a", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v2", "dataset:nlpai-lab/databricks-dolly-15k-ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-12T03:19:54Z"
--- library_name: transformers license: apache-2.0 base_model: beomi/llama-2-koen-13b datasets: - Saxo/total_ko_train_set_small_basic - beomi/KoAlpaca-v1.1a - kyujinpy/KOR-OpenOrca-Platypus-v2 - nlpai-lab/databricks-dolly-15k-ko language: - ko - en pipeline_tag: text-generation --- # Model Card for Model ID <div align="center"> <img src="https://www.linkbricks.com/wp-content/uploads/2022/03/%E1%84%85%E1%85%B5%E1%86%BC%E1%84%8F%E1%85%B3%E1%84%87%E1%85%B3%E1%84%85%E1%85%B5%E1%86%A8%E1%84%89%E1%85%B3%E1%84%85%E1%85%A9%E1%84%80%E1%85%A9-2-1024x804.png" /> </div> An instruction model created by Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, by SFT-training the beomi/llama-2-koen-13b base model (2,048-token context) for 4 hours on 4 A100-40G GPUs on GCP. The Accelerate and DeepSpeed ZeRO-3 libraries were used, and Flash Attention was disabled. www.linkbricks.com, www.linkbricks.vc ## Configuration including BitsandBytes --- learning_rate = 2e-4 num_epochs = 5 batch_size = 4 block_size = 2048 trainer = "sft" warmup_ratio = 0.1 weight_decay = 0.01 gradient_accumulation = 4 mixed_precision = "fp16" peft = True quantization = "int4" lora_r = 64 lora_alpha = 16 lora_dropout = 0.1 model_max_length = 2048 --- ## Dataset Format Alpaca Format Prompt Text
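For reference, the "quantization = int4" and "mixed_precision = fp16" entries in the configuration above map onto a bitsandbytes 4-bit load roughly as follows; the specific bnb options are assumptions, since the card lists only the high-level flags.

```python
# Sketch of the int4 / fp16 settings as a bitsandbytes 4-bit load (specific options are assumptions).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "beomi/llama-2-koen-13b",   # base model named in the card
    quantization_config=bnb_config,
    device_map="auto",
)
```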
gwonny/nox-solar-10.7b-v4-kolon-all-5
gwonny
"2024-03-27T04:46:02Z"
1,107
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "ko", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-21T02:05:18Z"
--- license: cc-by-nc-4.0 language: - ko - en --- ## Base Model: We built an LLM on top of davidkim205/nox-solar-10.7b-v4 ## Model Description We used QLoRA(64, 16) and NEFTune for LLM fine-tuning, with lr = 2e-4. ## Train Detail Our private data was used, and the model was trained for 5 epochs. ## Others We are building an LLM for Kolon!
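The card mentions NEFTune and lr = 2e-4 but no code. In recent transformers releases (>= 4.36) NEFTune is exposed through the Trainer via neftune_noise_alpha, so a matching TrainingArguments sketch could look like the one below; the noise alpha and the remaining arguments are assumptions.

```python
# Illustrative TrainingArguments (NEFTune alpha and other values are assumptions, not from the card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="nox-kolon-sft",
    learning_rate=2e-4,          # stated in the card
    num_train_epochs=5,          # "5 epoch train"
    neftune_noise_alpha=5.0,     # assumed value; the card does not give the NEFTune alpha
    fp16=True,
)
```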
gemmathon/gemma-2b-ko-dev-pbc432
gemmathon
"2024-03-25T07:55:48Z"
1,107
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-25T07:51:35Z"
--- license: other library_name: transformers license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
JY623/KoSOLAR-10.7B-merge-v2.0
JY623
"2024-04-03T09:00:10Z"
1,107
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:krevas/SOLAR-10.7B", "base_model:chihoonlee10/T3Q-ko-solar-sft-dpo-v1.0", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-03T08:47:44Z"
--- base_model: - krevas/SOLAR-10.7B - chihoonlee10/T3Q-ko-solar-sft-dpo-v1.0 library_name: transformers tags: - mergekit - merge license: apache-2.0 --- # slerp_output_model This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [krevas/SOLAR-10.7B](https://huggingface.co/krevas/SOLAR-10.7B) * [chihoonlee10/T3Q-ko-solar-sft-dpo-v1.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-sft-dpo-v1.0) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: krevas/SOLAR-10.7B layer_range: [0, 48] - model: chihoonlee10/T3Q-ko-solar-sft-dpo-v1.0 layer_range: [0, 48] merge_method: slerp base_model: krevas/SOLAR-10.7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: float16 ```
metterian/gemma-pro-ko-10b
metterian
"2024-04-19T07:13:27Z"
1,107
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-14T12:31:18Z"
--- library_name: transformers license: afl-3.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ibbb/LLama-3-tryModel
ibbb
"2024-07-02T07:54:03Z"
1,107
0
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-18T11:37:24Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Fulcrum-AI/Ryze-Q4_K_M-GGUF
Fulcrum-AI
"2024-06-26T07:52:46Z"
1,107
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:Fulcrum-AI/Ryze", "license:apache-2.0", "region:us" ]
null
"2024-06-26T07:50:10Z"
--- base_model: Fulcrum-AI/Ryze license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # Fulcrum-AI/Ryze-Q4_K_M-GGUF This model was converted to GGUF format from [`Fulcrum-AI/Ryze`](https://huggingface.co/Fulcrum-AI/Ryze) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Fulcrum-AI/Ryze) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo Fulcrum-AI/Ryze-Q4_K_M-GGUF --hf-file ryze-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo Fulcrum-AI/Ryze-Q4_K_M-GGUF --hf-file ryze-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo Fulcrum-AI/Ryze-Q4_K_M-GGUF --hf-file ryze-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo Fulcrum-AI/Ryze-Q4_K_M-GGUF --hf-file ryze-q4_k_m.gguf -c 2048 ```
flair/upos-multi-fast
flair
"2021-03-02T22:22:55Z"
1,106
4
flair
[ "flair", "pytorch", "token-classification", "sequence-tagger-model", "en", "de", "fr", "it", "nl", "pl", "es", "sv", "da", "no", "fi", "cs", "dataset:ontonotes", "region:us" ]
token-classification
"2022-03-02T23:29:05Z"
--- tags: - flair - token-classification - sequence-tagger-model language: - en - de - fr - it - nl - pl - es - sv - da - no - fi - cs datasets: - ontonotes widget: - text: "Ich liebe Berlin, as they say." --- ## Multilingual Universal Part-of-Speech Tagging in Flair (fast model) This is the fast multilingual universal part-of-speech tagging model that ships with [Flair](https://github.com/flairNLP/flair/). F1-Score: **92,88** (12 UD Treebanks covering English, German, French, Italian, Dutch, Polish, Spanish, Swedish, Danish, Norwegian, Finnish and Czech) Predicts universal POS tags: | **tag** | **meaning** | |---------------------------------|-----------| |ADJ | adjective | | ADP | adposition | | ADV | adverb | | AUX | auxiliary | | CCONJ | coordinating conjunction | | DET | determiner | | INTJ | interjection | | NOUN | noun | | NUM | numeral | | PART | particle | | PRON | pronoun | | PROPN | proper noun | | PUNCT | punctuation | | SCONJ | subordinating conjunction | | SYM | symbol | | VERB | verb | | X | other | Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF. --- ### Demo: How to use in Flair Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("flair/upos-multi-fast") # make example sentence sentence = Sentence("Ich liebe Berlin, as they say. ") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('pos'): print(entity) ``` This yields the following output: ``` Span [1]: "Ich" [โˆ’ Labels: PRON (0.9999)] Span [2]: "liebe" [โˆ’ Labels: VERB (0.9999)] Span [3]: "Berlin" [โˆ’ Labels: PROPN (0.9997)] Span [4]: "," [โˆ’ Labels: PUNCT (1.0)] Span [5]: "as" [โˆ’ Labels: SCONJ (0.9991)] Span [6]: "they" [โˆ’ Labels: PRON (0.9998)] Span [7]: "say" [โˆ’ Labels: VERB (0.9998)] Span [8]: "." [โˆ’ Labels: PUNCT (1.0)] ``` So, the words "*Ich*" and "*they*" are labeled as **pronouns** (PRON), while "*liebe*" and "*say*" are labeled as **verbs** (VERB) in the multilingual sentence "*Ich liebe Berlin, as they say*". --- ### Training: Script to train this model The following Flair script was used to train this model: ```python from flair.data import MultiCorpus from flair.datasets import UD_ENGLISH, UD_GERMAN, UD_FRENCH, UD_ITALIAN, UD_POLISH, UD_DUTCH, UD_CZECH, \ UD_DANISH, UD_SPANISH, UD_SWEDISH, UD_NORWEGIAN, UD_FINNISH from flair.embeddings import StackedEmbeddings, FlairEmbeddings # 1. make a multi corpus consisting of 12 UD treebanks (in_memory=False here because this corpus becomes large) corpus = MultiCorpus([ UD_ENGLISH(in_memory=False), UD_GERMAN(in_memory=False), UD_DUTCH(in_memory=False), UD_FRENCH(in_memory=False), UD_ITALIAN(in_memory=False), UD_SPANISH(in_memory=False), UD_POLISH(in_memory=False), UD_CZECH(in_memory=False), UD_DANISH(in_memory=False), UD_SWEDISH(in_memory=False), UD_NORWEGIAN(in_memory=False), UD_FINNISH(in_memory=False), ]) # 2. what tag do we want to predict? tag_type = 'upos' # 3. make the tag dictionary from the corpus tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type) # 4. 
initialize each embedding we use embedding_types = [ # contextual string embeddings, forward FlairEmbeddings('multi-forward-fast'), # contextual string embeddings, backward FlairEmbeddings('multi-backward-fast'), ] # embedding stack consists of the forward and backward Flair embeddings embeddings = StackedEmbeddings(embeddings=embedding_types) # 5. initialize sequence tagger from flair.models import SequenceTagger tagger = SequenceTagger(hidden_size=256, embeddings=embeddings, tag_dictionary=tag_dictionary, tag_type=tag_type, use_crf=False) # 6. initialize trainer from flair.trainers import ModelTrainer trainer = ModelTrainer(tagger, corpus) # 7. run training trainer.train('resources/taggers/upos-multi-fast', train_with_dev=True, max_epochs=150) ``` --- ### Cite Please cite the following paper when using this model. ``` @inproceedings{akbik2018coling, title={Contextual String Embeddings for Sequence Labeling}, author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland}, booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics}, pages = {1638--1649}, year = {2018} } ``` --- ### Issues? The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
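As a closing note, `SequenceTagger.predict` also accepts a list of sentences, which is convenient when tagging larger batches of text. A minimal sketch (the example sentences are illustrative):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# load tagger once, then reuse it for a whole batch
tagger = SequenceTagger.load("flair/upos-multi-fast")

sentences = [
    Sentence("Ich liebe Berlin."),
    Sentence("They say it is beautiful."),
]
tagger.predict(sentences)

for sentence in sentences:
    print(sentence)
```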
huggingtweets/porns_xx
huggingtweets
"2021-08-07T13:34:18Z"
1,106
18
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: en thumbnail: https://www.huggingtweets.com/porns_xx/1628343064919/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1423389132508782593/Meo5eDzd_400x400.png&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐Ÿค– AI BOT ๐Ÿค–</div> <div style="text-align: center; font-size: 16px; font-weight: 800">PORN HUB ๐Ÿ”ž</div> <div style="text-align: center; font-size: 14px;">@porns_xx</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from PORN HUB ๐Ÿ”ž. | Data | PORN HUB ๐Ÿ”ž | | --- | --- | | Tweets downloaded | 1399 | | Retweets | 0 | | Short tweets | 7 | | Tweets kept | 1392 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/200x5dgt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @porns_xx's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ha11ly3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ha11ly3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/porns_xx') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
timm/maxvit_xlarge_tf_224.in21k
timm
"2023-05-11T00:31:27Z"
1,106
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-21k", "arxiv:2204.01697", "license:apache-2.0", "region:us" ]
image-classification
"2023-05-11T00:24:21Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-21k --- # Model card for maxvit_xlarge_tf_224.in21k An official MaxViT image classification model. Trained in tensorflow on ImageNet-21k (21843 Google specific instance of ImageNet-22k) by paper authors. Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations. All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. 
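For reference, the concrete model names in each of these families can be enumerated straight from `timm`. A small sketch (the returned names depend on the installed `timm` version):

```python
import timm

# Enumerate the MaxxViT-family architectures registered in timm.
# The exact set of names varies with the installed timm version.
print(timm.list_models("maxvit*"))
print(timm.list_models("coatnet*"))
print(timm.list_models("maxxvit*"))
```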
## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 507.0 - GMACs: 97.5 - Activations (M): 191.0 - Image size: 224 x 224 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-21k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm import torch img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('maxvit_xlarge_tf_224.in21k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_xlarge_tf_224.in21k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 192, 112, 112]) # torch.Size([1, 192, 56, 56]) # torch.Size([1, 384, 28, 28]) # torch.Size([1, 768, 14, 14]) # torch.Size([1, 1536, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_xlarge_tf_224.in21k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1536, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| 
|[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| 
|[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| 
|[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) 
|86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
cyl/awsome-llama
cyl
"2023-05-30T02:44:39Z"
1,106
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-29T06:02:06Z"
Entry not found
TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF
TheBloke
"2023-09-27T12:48:36Z"
1,106
12
transformers
[ "transformers", "gguf", "llama", "en", "arxiv:2306.02707", "base_model:fangloveskari/ORCA_LLaMA_70B_QLoRA", "license:llama2", "text-generation-inference", "region:us" ]
null
"2023-09-09T17:17:05Z"
--- language: - en license: llama2 library_name: transformers model_name: ORCA LLaMA 70B QLoRA base_model: fangloveskari/ORCA_LLaMA_70B_QLoRA inference: false model_creator: fangloveskari model_type: llama prompt_template: 'Info on prompt template will be added shortly. ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # ORCA LLaMA 70B QLoRA - GGUF - Model creator: [fangloveskari](https://huggingface.co/fangloveskari) - Original model: [ORCA LLaMA 70B QLoRA](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA) <!-- description start --> ## Description This repo contains GGUF format model files for [fangloveskari's ORCA LLaMA 70B QLoRA](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It also supports metadata, and is designed to be extensible. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. 
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF) * [fangloveskari's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/fangloveskari/ORCA_LLaMA_70B_QLoRA) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: TBC ``` Info on prompt template will be added shortly. ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [orca_llama_70b_qlora.Q2_K.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes | | [orca_llama_70b_qlora.Q3_K_S.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss | | [orca_llama_70b_qlora.Q3_K_M.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss | | [orca_llama_70b_qlora.Q3_K_L.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss | | [orca_llama_70b_qlora.Q4_0.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [orca_llama_70b_qlora.Q4_K_S.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss | | [orca_llama_70b_qlora.Q4_K_M.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended | | [orca_llama_70b_qlora.Q5_0.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [orca_llama_70b_qlora.Q5_K_S.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended | | [orca_llama_70b_qlora.Q5_K_M.gguf](https://huggingface.co/TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF/blob/main/orca_llama_70b_qlora.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended | | orca_llama_70b_qlora.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss | | orca_llama_70b_qlora.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. ### Q6_K and Q8_0 files are split and require joining **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files. 
<details> <summary>Click for instructions regarding Q6_K and Q8_0 files</summary> ### q6_K Please download: * `orca_llama_70b_qlora.Q6_K.gguf-split-a` * `orca_llama_70b_qlora.Q6_K.gguf-split-b` ### q8_0 Please download: * `orca_llama_70b_qlora.Q8_0.gguf-split-a` * `orca_llama_70b_qlora.Q8_0.gguf-split-b` To join the files, do the following: Linux and macOS: ``` cat orca_llama_70b_qlora.Q6_K.gguf-split-* > orca_llama_70b_qlora.Q6_K.gguf && rm orca_llama_70b_qlora.Q6_K.gguf-split-* cat orca_llama_70b_qlora.Q8_0.gguf-split-* > orca_llama_70b_qlora.Q8_0.gguf && rm orca_llama_70b_qlora.Q8_0.gguf-split-* ``` Windows command line: ``` COPY /B orca_llama_70b_qlora.Q6_K.gguf-split-a + orca_llama_70b_qlora.Q6_K.gguf-split-b orca_llama_70b_qlora.Q6_K.gguf del orca_llama_70b_qlora.Q6_K.gguf-split-a orca_llama_70b_qlora.Q6_K.gguf-split-b COPY /B orca_llama_70b_qlora.Q8_0.gguf-split-a + orca_llama_70b_qlora.Q8_0.gguf-split-b orca_llama_70b_qlora.Q8_0.gguf del orca_llama_70b_qlora.Q8_0.gguf-split-a orca_llama_70b_qlora.Q8_0.gguf-split-b ``` </details> <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF and below it, a specific filename to download, such as: orca_llama_70b_qlora.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF orca_llama_70b_qlora.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF orca_llama_70b_qlora.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. 
```shell ./main -ngl 32 -m orca_llama_70b_qlora.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Info on prompt template will be added shortly." ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/ORCA_LLaMA_70B_QLoRA-GGUF", model_file="orca_llama_70b_qlora.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. 
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค€๊ต ๊น€, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้˜ฟๆ˜Ž, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: fangloveskari's ORCA LLaMA 70B QLoRA # Dolphin_ORCA_PlatyPus_LLaMA_70b ### Dataset Here is the list of datasets used: * Dolphin * Open-Platypus * OpenOrca **mixed strategy: 100%Open-Platypus + ~1%Dolphin(GPT-4) + ~1%OpenOrca(GPT-4)** <br> **Model Finetuned By fangloveskari.** <br> ### Training FrameWork and Parameters #### FrameWork https://github.com/hiyouga/LLaMA-Efficient-Tuning We add flash_attention_2 and ORCA dataset support, with some minor modifications. <br> #### Parameters We list some training parameters here: | Parameter | Value | |-----------------------|-------------| | Finetune_Type | QLoRA(NF4) | | LoRA_Rank | 16 | | LoRA_Alpha | 16 | | Batch_Size | 14 | | GPUs | 8xA100(80G) | | LR_Scheduler | cosine | | LR | 3e-4 | | Epoch | 1 | | DeepSpeed | ZERO-2 | <br> ### Model Export We tried two methods to fuse the adapter back to the base model: * https://github.com/hiyouga/LLaMA-Efficient-Tuning/blob/main/src/export_model.py * https://github.com/jondurbin/qlora/blob/main/qmerge.py Generally, the second will get better ARC(+0.15) and Truthful_QA(+0.3) scores but the other two(MMLU(-0.2) and HelloSwag(-0.2)) seems to degenerate (Just for my model). <br> ### Evaluation | Metric | Value | |-----------------------|-------| | ARC (25-shot) | 72.27 | | HellaSwag (10-shot) | 87.74 | | MMLU (5-shot) | 70.23 | | TruthfulQA (0-shot) | 63.37 | | Avg. | 73.40 | <br> ### license disclaimer: This model is bound by the license & usage restrictions of the original Llama-2 model. And comes with no warranty or gurantees of any kind. <br> ### Limitations & Biases: Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. 
For these reasons, as with all LLMs, Llama 2 and any fine-tuned varient's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ <br> ### Citiation: Please kindly cite using the following BibTeX: ```bibtex @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } ``` ``` @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ``` @software{touvron2023llama2, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu , Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom}, year={2023} } ``` <!-- original-model-card end -->
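The README above mentions `llama-cpp-python` alongside `ctransformers` but only shows a `ctransformers` snippet. Below is a minimal `llama-cpp-python` sketch for a locally downloaded GGUF file; the context size and GPU layer count are illustrative and should be tuned to your hardware:

```python
from llama_cpp import Llama

# Load a downloaded GGUF quantisation with llama-cpp-python.
llm = Llama(
    model_path="./orca_llama_70b_qlora.Q4_K_M.gguf",
    n_ctx=4096,       # context length
    n_gpu_layers=35,  # set to 0 for CPU-only inference
)

output = llm("AI is going to", max_tokens=64)
print(output["choices"][0]["text"])
```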
PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test
PracticeLLM
"2024-01-10T09:25:55Z"
1,106
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "en", "ko", "arxiv:2312.15166", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-31T08:57:20Z"
--- language: - en - ko pipeline_tag: text-generation license: cc-by-nc-sa-4.0 tags: - merge --- # **Twice-KoSOLAR-16.1B-instruct-test** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Model Purpose** <img src='./solar.png'> Recently, the SOLAR-10.7B model introduced the [Depth-Up-Scaling](https://arxiv.org/pdf/2312.15166.pdf) methodology (pictured above) and has been showing strong results on the LLM leaderboard. In addition, the `seungduk/KoSOLAR-10.7B-v0.1` model built by `Yanolja` has had a big impact on the Ko-LLM leaderboard and is expected to change its future direction. This raised a simple question for me. **The Depth-Up-Scaling (DUS) methodology published by Upstage merges (passthrough) two mistral-7B models.** Surprisingly, the `upstage/SOLAR-10.7B-v1.0` model built with DUS scored higher on the leaderboard than the original mistral-7B model (see the table below). So I was very curious whether applying the DUS methodology without restriction to other models would produce the same result. ๐Ÿ™ƒ Through this experiment, I want to reach a conclusion about that curiosity. ๐Ÿ˜‹๐Ÿ˜‹ | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | | --- | --- | --- | --- | --- | --- | --- | --- | | [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1) | **66.04** | 62.03 | 84.54 | 65.56 | 45.03 | 83.58 | 55.50 | | [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | **66.04** | 61.95 | 84.60 | 65.48 | 45.04 | 83.66 | 55.50 | | [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 | > Follow up as [En-link](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). **Method** Instruction-tuning. **Hyperparameters** ```bash python finetune.py \ --base_model PracticeLLM/Twice-KoSOLAR-16.1B-test \ --data-path kyujinpy/KOR-OpenOrca-Platypus-v3 \ --output_dir ./Twice-KoSOLAR-16.1B-instruct-test \ --batch_size 64 \ --micro_batch_size 1 \ --num_epochs 1 \ --learning_rate 3e-5 \ --cutoff_len 4096 \ --val_set_size 0 \ --lora_r 16 \ --lora_alpha 16 \ --lora_dropout 0.05 \ --lora_target_modules '[q_proj, k_proj, v_proj, o_proj, gate_proj, down_proj, up_proj, lm_head]' \ --train_on_inputs False \ --add_eos_token False \ --group_by_length False \ --prompt_template_name user_prompt \ --lr_scheduler 'cosine' \ #--warmup_steps 100 \ ``` > Sharing everything is my belief. # **Model Benchmark** ## Open Ko-LLM leaderboard & lm-evaluation-harness (zero-shot) - Follow up as [Ko-link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). 
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 | | --- | --- | --- | --- | --- | --- | --- | | PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test | 53.64 | 52.30 | 59.98 | 53.42 | 44.07 | 58.44 | | PracticeLLM/Twice-KoSOLAR-16.1B-test | 50.20 | 45.65 | 57.14 | 51.39 | 42.99 | 53.84 | | [Megastudy/M-SOLAR-10.7B-v1.1-beta](https://huggingface.co/Megastudy/M-SOLAR-10.7B-v1.1-beta) | 55.25 | 51.71 | 60.86 | 54.24 | 47.12 | 62.34 | | [jjourney1125/M-SOLAR-10.7B-v1.0](https://huggingface.co/jjourney1125/M-SOLAR-10.7B-v1.0) | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 | | [seungduk/KoSOLAR-10.7B-v0.1](https://huggingface.co/seungduk/KoSOLAR-10.7B-v0.1) | 52.40 | 47.18 | 59.54 | 52.04 | 41.84 | 61.39 | - Follow up as [beomi/LM-Harness](https://github.com/Beomi/ko-lm-evaluation-harness) ``` gpt2 (pretrained=PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.5100|ยฑ |0.0133| | | |macro_f1|0.3527|ยฑ |0.0079| |kobest_copa | 0|acc |0.6740|ยฑ |0.0148| | | |macro_f1|0.6732|ยฑ |0.0148| |kobest_hellaswag| 0|acc |0.4640|ยฑ |0.0223| | | |acc_norm|0.5480|ยฑ |0.0223| | | |macro_f1|0.4585|ยฑ |0.0223| |kobest_sentineg | 0|acc |0.6574|ยฑ |0.0238| | | |macro_f1|0.6184|ยฑ |0.0253| gpt2 (pretrained=PracticeLLM/Twice-KoSOLAR-16.1B-test), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.7201|ยฑ |0.0120| | | |macro_f1|0.7073|ยฑ |0.0124| |kobest_copa | 0|acc |0.6510|ยฑ |0.0151| | | |macro_f1|0.6506|ยฑ |0.0151| |kobest_hellaswag| 0|acc |0.4520|ยฑ |0.0223| | | |acc_norm|0.5820|ยฑ |0.0221| | | |macro_f1|0.4475|ยฑ |0.0222| |kobest_sentineg | 0|acc |0.7078|ยฑ |0.0229| | | |macro_f1|0.7071|ยฑ |0.0229| gpt2 (pretrained=Megastudy/M-SOLAR-10.7B-v1.1-beta), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.7137|ยฑ |0.0121| | | |macro_f1|0.6878|ยฑ |0.0128| |kobest_copa | 0|acc |0.7060|ยฑ |0.0144| | | |macro_f1|0.7054|ยฑ |0.0145| |kobest_hellaswag| 0|acc |0.4620|ยฑ |0.0223| | | |acc_norm|0.5360|ยฑ |0.0223| | | |macro_f1|0.4595|ยฑ |0.0223| |kobest_sentineg | 0|acc |0.7431|ยฑ |0.0220| | | |macro_f1|0.7295|ยฑ |0.0230| gpt2 (pretrained=jjourney1125/M-SOLAR-10.7B-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.5228|ยฑ |0.0133| | | |macro_f1|0.3788|ยฑ |0.0097| |kobest_copa | 0|acc |0.6860|ยฑ |0.0147| | | |macro_f1|0.6858|ยฑ |0.0147| |kobest_hellaswag| 0|acc |0.4580|ยฑ |0.0223| | | |acc_norm|0.5380|ยฑ |0.0223| | | |macro_f1|0.4552|ยฑ |0.0222| |kobest_sentineg | 0|acc |0.6474|ยฑ |0.0240| | | |macro_f1|0.6012|ยฑ |0.0257| gpt2 (pretrained=yanolja/KoSOLAR-10.7B-v0.1), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |----------------|------:|--------|-----:|---|-----:| |kobest_boolq | 0|acc |0.8725|ยฑ |0.0089| | | |macro_f1|0.8722|ยฑ |0.0089| |kobest_copa | 0|acc |0.6850|ยฑ |0.0147| | | |macro_f1|0.6844|ยฑ |0.0147| |kobest_hellaswag| 0|acc |0.4340|ยฑ |0.0222| | | |acc_norm|0.5840|ยฑ |0.0221| | | |macro_f1|0.4296|ยฑ |0.0221| 
|kobest_sentineg | 0|acc |0.7506|ยฑ |0.0217| | | |macro_f1|0.7505|ยฑ |0.0217| ``` # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` --- Refereces (Model Card) # yanolja/KoSOLAR-10.7B-v0.1 This model is a Korean vocabulary-extended version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0), trained on various Korean web-crawled datasets that are publicly available on HuggingFace. The hypothesis was that while maintaining the original performance of the base model, we could add more tokens to the base model's vocabulary by training the embeddings for the new tokens only. The evaluation results seem to indicate that both English and Korean performances were preserved. ## Model Description Most parameters of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) were frozen except for the embed_tokens layer and the lm_head layer. Embeddings for the existing tokens in those layers were frozen during training. The embeddings for the new tokens have been tuned. --- # **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!** # **Introduction** We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B. We present a methodology for scaling LLMs called depth up-scaling (DUS) , which encompasses architectural modifications and continued pretraining. In other words, we integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model. SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table. Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements ([SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)). For full details of this model please read our [paper](https://arxiv.org/abs/2312.15166).
melonpower39/unv_v0.1.5
melonpower39
"2024-01-02T00:18:59Z"
1,106
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-01T23:06:45Z"
Entry not found
cocoirun/Yi-Ko-6B-instruct-v1.2
cocoirun
"2024-01-11T01:48:20Z"
1,106
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-08T01:27:58Z"
--- license: cc-by-sa-4.0 --- <h1>Instruct model v1.2</h1> <b>Training data construction</b> After analyzing the Open-Orca-ko data and extracting its tasks, we built roughly 40,000 training examples ourselves (history, science, mathematics, machine reading comprehension, review analysis) from task-matched open-source NLP data; in addition, part of the Open-Orca-Ko data was filtered and cleaned, and the KoBEST data was added. Additional training data was built from the AI Hub common-sense and machine reading comprehension datasets (morphology, reading comprehension, and summarization). History and general-knowledge quizzes from various blogs were converted by hand into training-data format. The AI2AI Challenge data was translated with Papago, and mistranslated parts were corrected manually. English translation data (English-to-Korean / Korean-to-English) was also used for training. In total, SFT was carried out on about 110,000 training examples. <br> We are currently translating and cleaning part of the Open-Orca dataset to train and improve a new version of the model. <br> + Added high-school history questions and TruthfulQA-related questions. + Added various IT knowledge data. + For machine reading comprehension, answers were obtained through ChatGPT and used as training data. + Grammar-related training data. <br> ### The training data files are not publicly released. <br> <b>Training</b> Training was done with LoRA on 2x A100 40G GPUs.
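The card does not include inference code; below is a minimal usage sketch with the standard `transformers` causal-LM API (the prompt and generation settings are illustrative, since the card does not document a prompt format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cocoirun/Yi-Ko-6B-instruct-v1.2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?"  # "What is the capital of South Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```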
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v0.71
jungyuko
"2024-01-28T12:26:32Z"
1,106
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-28T11:40:57Z"
--- license: cc-by-nc-4.0 --- ## DAVinCI-42dot_LLM-PLM-1.3B-v0.71 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 24 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 4 * total_train_batch_size: 96 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
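The hyperparameters listed above map directly onto a `TrainingArguments` object; the sketch below is a hypothetical reconstruction of that configuration (the output path is a placeholder, and dataset loading plus the `Trainer` call are omitted):

```python
# Hypothetical reconstruction of the listed hyperparameters; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./davinci-42dot-plm-1.3b",
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # total train batch size 96
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```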
ghost-x/ghost-7b-v0.9.0
ghost-x
"2024-03-10T12:10:11Z"
1,106
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "ghost", "conversational", "en", "vi", "base_model:HuggingFaceH4/zephyr-7b-beta", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-29T16:01:08Z"
--- language: - en - vi license: mit library_name: transformers tags: - ghost pipeline_tag: text-generation base_model: HuggingFaceH4/zephyr-7b-beta widget: - text: '<|system|> You are a helpful assistant.</s> <|user|> Thรดng tin vแป Peristernia despecta</s> <|assistant|> ' output: text: Peristernia despecta lร  mแป™t loร i แป‘c biแปƒn, lร  ฤ‘แป™ng vแบญt thรขn mแปm chรขn bแปฅng sแป‘ng แปŸ biแปƒn trong hแป Fasciolariidae. model-index: - name: lamhieu/ghost-7b-v0.9.0 results: - task: type: text-generation dataset: name: VMLU type: vmlu_v1.5 metrics: - type: avg value: 36.06 name: Average verified: true - type: stem value: 33.54 name: STEM verified: true - type: ss value: 38.74 name: Social science verified: true - type: hm value: 37.15 name: Humanities verified: true - type: ot value: 36.78 name: Other verified: true - task: type: text-generation dataset: name: Open LLM Leaderboard type: open_llm_leaderboard metrics: - type: avg value: 56.89 name: Average verified: true - type: arc value: 53.07 name: ARC verified: true - type: hs value: 77.93 name: HellaSwag verified: true - type: hs value: 77.93 name: HellaSwag verified: true - type: mmlu value: 55.09 name: MMLU verified: true - type: wg value: 73.72 name: Winogrande verified: true - type: gsm8k value: 33.74 name: GSM8K verified: true source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 53.07 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 77.93 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 55.09 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 47.79 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 73.72 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 33.74 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=lamhieu/ghost-7b-v0.9.0 name: Open LLM Leaderboard --- # Model Card for Model ID **Ghost 7B Alpha, flying, 
v0.9.0** ## Model Details ### Model Description This model is fine tuned from **HuggingFaceH4/zephyr-7b-beta** on a small synthetic datasets (about 200MB) for 50% English and 50% Vietnamese. - **Developed by:** **Lam H** - **Language(s) (NLP):** English, Vietnamese - **License:** MIT - **Finetuned from model:** [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) ## Uses This model supports both conversation chat and tasks. Feel free to experiment and don't limit your creativity. The simplest way to try it is to use the `pipeline` from `transformers`. ```python import torch from transformers import pipeline pipe = pipeline( "text-generation", model="lamhieu/ghost-7b-v0.9.0", torch_dtype=torch.bfloat16, ) ``` You can then try any of the sample codes below, formatted using the chat template. ```python messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "nรณi tรดi biแบฟt bแป‡nh dแป‹ch hแบกch แปŸ chรขu ร‚u do khuแบฉn nร o gรขy ra"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False) outputs = pipe.model.generate(**tokenized, max_new_tokens=512) results = tokenizer.batch_decode(outputs)[0] print(results) # Bแป‡nh dแป‹ch hแบกch แปŸ chรขu ร‚u do khuแบฉn gรขy ra lร  do khuแบฉn Yersinia pestis. ``` ```python messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Thรดng tin vแป Peristernia despecta"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False) outputs = pipe.model.generate(**tokenized, max_new_tokens=512) results = tokenizer.batch_decode(outputs)[0] print(results) # Peristernia despecta lร  mแป™t loร i แป‘c biแปƒn, lร  ฤ‘แป™ng vแบญt thรขn mแปm chรขn bแปฅng sแป‘ng แปŸ biแปƒn trong hแป Fasciolariidae. # ... ``` ```python messages = [ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "do u know vietnam ?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False) outputs = pipe.model.generate(**tokenized, max_new_tokens=512) results = tokenizer.batch_decode(outputs)[0] print(results) # Yes, I have knowledge about Vietnam. Vietnam is a country in Southeast Asia, bordered by China to the north, Laos and Cambodia to the west, and the South China Sea to the east and south. Its capital city is Hanoi, and its largest city is Ho Chi Minh City (formerly known as Saigon). Vietnam has a population of approximately 100 million people and a diverse cultural heritage influenced by both Chinese and French colonialism. The country has a rich history, including periods of independence, colonization, and resistance, and has experienced significant economic growth in recent years. ``` ```python messages = [ {"role": "system", "content": "You are a helpful assistant, who always provide explanation. 
Think like you are answering to a five year old."}, {"role": "user", "content": "Tรดi yรชu em nhiแปu hฦกn em nghฤฉ.\n\nWhich language is this?"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False) outputs = pipe.model.generate(**tokenized, max_new_tokens=512) results = tokenizer.batch_decode(outputs)[0] print(results) # This is Vietnamese language. Vietnamese is a language spoken mainly in Vietnam and by the Vietnamese diaspora in many other countries. The sentence you provided means "I love you more than you think." It's like you have more love for someone than they realize. ``` Another example of what you can use to chat multiple turns. ```python messages = [ # {"role": "system", "content": "You are a helpful and knowledgeable assistant. You like to help and always give honest information, in its original language. In communication, you are always respectful, equal and promote positive behavior."}, {"role": "system", "content": "You are a helpful assistant."}, # Describe to your assistant, anything. {"role": "user", "content": "Bla bla bla"}, {"role": "assistant", "content": "Bla bla bla"}, {"role": "user", "content": "Bla bla bla"}, ] prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) tokenized = pipe.tokenizer(prompt, return_tensors="pt", add_special_tokens=False) outputs = pipe.model.generate(**tokenized, max_new_tokens=512) results = tokenizer.batch_decode(outputs)[0] print(results) ``` ## Evaluation ### Results #### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lamhieu__ghost-7b-v0.9.0) | Metric |Value| |---------------------------------|----:| |Avg. |56.89| |AI2 Reasoning Challenge (25-Shot)|53.07| |HellaSwag (10-Shot) |77.93| |MMLU (5-Shot) |55.09| |TruthfulQA (0-shot) |47.79| |Winogrande (5-shot) |73.72| |GSM8k (5-shot) |33.74| #### VMLU Below are the results evaluated with the VMLU evaluation suite, which is often used to evaluate models that work with Vietnamese. Note: the results are run with the model in 4bit quantization, I'm not sure if it has any loss in results or not, if someone can help me run it with full it would be great. 
![VMLU - lamhieu/ghost-7b-v0.9.0](https://cdn-uploads.huggingface.co/production/uploads/600ae38cc92b79f54efd4556/GdMgr0-YnAGRqD_RJr_ux.png) <details> <summary>Details</summary> ```python { "stem": { "elementary_mathematics": 32.22, "elementary_science": 56.11, "high_school_biology": 32.78, "high_school_chemistry": 27.78, "high_school_mathematics": 33.78, "high_school_physics": 26.11, "introduction_to_chemistry": 26.82, "introduction_to_physics": 33.53, "introduction_to_programming": 39.66, "metrology_engineer": 36.17, "middle_school_biology": 40, "middle_school_chemistry": 26.67, "middle_school_mathematics": 27.78, "middle_school_physics": 27.22, "operating_system": 38.33, "statistics_and_probability": 18.39, "total": 33.54, "applied_informatics": 47.78, "computer_architecture": 36.11, "computer_network": 41.34, "discrete_mathematics": 29.7, "electrical_engineering": 26.14 }, "other": { "total": 36.78, "accountant": 29.17, "civil_servant": 29.82, "clinical_pharmacology": 35.56, "driving_license_certificate": 56.73, "environmental_engineering": 32.16, "internal_basic_medicine": 36.84, "preschool_pedagogy": 45.1, "tax_accountant": 24.71, "tax_civil_servant": 40.94 }, "total": 36.06, "humanity": { "introduction_to_vietnam_culture": 31.11, "logic": 28.16, "middle_school_history": 38.33, "administrative_law": 32.22, "revolutionary_policy_of_the_vietnamese_commununist_part": 40.56, "vietnamese_language_and_literature": 35.06, "total": 37.15, "middle_school_literature": 36.21, "business_law": 38.55, "civil_law": 48.33, "criminal_law": 37.42, "economic_law": 38.51, "education_law": 36.75, "elementary_history": 35.03, "high_school_history": 27.78, "high_school_literature": 32.78, "history_of_world_civilization": 43.33, "idealogical_and_moral_cultivation": 39.44, "introduction_to_laws": 49.21 }, "social_science": { "business_administration": 37.36, "high_school_civil_education": 42.78, "high_school_geography": 38.27, "ho_chi_minh_ideology": 40.22, "macroeconomics": 27.78, "microeconomics": 36.67, "middle_school_civil_education": 51.69, "middle_school_geography": 32.65, "principles_of_marxism_and_leninism": 35.56, "sociology": 44.38, "total": 38.74 } } ``` </details> ## More Information Many thanks for - Datasets: [5CD-AI](https://huggingface.co/5CD-AI), [vilm](https://huggingface.co/vilm). - Library: [unsloth](https://github.com/unslothai/unsloth) ## Model Card Contact **Lam H** ([email protected])
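The VMLU note above says the benchmark was run with the model in 4-bit quantization. A typical way to load the model in 4 bits with `bitsandbytes` is sketched below; this is a generic recipe, not necessarily the author's exact evaluation setup:

```python
# Generic 4-bit loading sketch; not necessarily the exact setup used for the VMLU run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "lamhieu/ghost-7b-v0.9.0",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("lamhieu/ghost-7b-v0.9.0")
```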
hyeogi/SOLAR-10.7B-v1.3
hyeogi
"2024-02-09T05:51:06Z"
1,106
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "SOLAR-10.7B", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-09T05:36:06Z"
--- language: - ko pipeline_tag: text-generation tags: - SOLAR-10.7B license: cc-by-nc-4.0 --- # SOLAR-10.7B ### Model Details - Base Model: [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2) ### Datasets - a sampled and translated subset of [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) - a sampled and translated subset of [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) - a translated version of [GAIR/lima](https://huggingface.co/datasets/GAIR/lima) - [jojo0217/korean_rlhf_dataset](https://huggingface.co/datasets/jojo0217/korean_rlhf_dataset) ### Benchmark
TeamUNIVA/Komodo_6B_v2.0.0
TeamUNIVA
"2024-03-04T09:24:45Z"
1,106
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "en", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-09T16:09:00Z"
--- license: apache-2.0 language: - ko - en --- # Base Model beomi/Yi-Ko-6B ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TeamUNIVA/Komodo_6B_v2.0.0" model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) text = '''<|system|> ๋‹น์‹ ์€ ์‚ฌ์šฉ์ž์˜ ์งˆ๋ฌธ์— ์นœ์ ˆํ•˜๊ฒŒ ๋‹ต๋ณ€์„ ํ•˜๋Š” ์ฑ—๋ด‡์ž…๋‹ˆ๋‹ค. <|user|> ์•ˆ๋…•ํ•˜์„ธ์š”? <|bot|> ''' inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.12
jungyuko
"2024-02-21T01:55:40Z"
1,106
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-21T01:40:46Z"
--- license: cc-by-nc-4.0 --- ## DAVinCI-42dot_LLM-PLM-1.3B-v1.12 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset. ### Model description More information needed ### Intended uses & limitations More information needed ### Training and evaluation data More information needed ### Training procedure ### Training hyperparameters The following hyperparameters were used during training: * learning_rate: 2e-05 * train_batch_size: 24 * eval_batch_size: 8 * seed: 42 * gradient_accumulation_steps: 4 * total_train_batch_size: 96 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr_scheduler_type: linear * num_epochs: 1.0 * mixed_precision_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.0.0 * Tokenizers 0.15.0
chlee10/T3Q-Merge-SOLAR
chlee10
"2024-03-13T06:54:31Z"
1,106
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-12T08:17:44Z"
--- license: apache-2.0 --- ## T3Q-Merge-SOLAR T3Q-Merge-SOLAR is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5) * [hwkwon/S-SOLAR-10.7B-SFT-v1.2](https://huggingface.co/hwkwon/S-SOLAR-10.7B-SFT-v1.2) **Model Developers** Chihoon Lee(chlee10), T3Q ```yaml slices: - sources: - model: davidkim205/komt-solar-10.7b-sft-v5 layer_range: [0, 48] - model: hwkwon/S-SOLAR-10.7B-SFT-v1.2 layer_range: [0, 48] merge_method: slerp base_model: davidkim205/komt-solar-10.7b-sft-v5 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: bfloat16 ```
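The card gives the merge recipe but no usage code. Since the recipe merges in bfloat16, a minimal loading sketch that keeps the same dtype might look like this (the prompt is a placeholder; the card does not document a prompt template):

```python
# Minimal loading sketch for the merged checkpoint; the prompt format is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "chlee10/T3Q-Merge-SOLAR"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Put your prompt here.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```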
Heoni/Aguie-chat_v0.1
Heoni
"2024-04-04T12:17:48Z"
1,106
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-04T10:02:35Z"
--- license: cc-by-nc-nd-4.0 language: - ko - en --- ์•„์ˆ˜๋ผ๋ฐœ๋ฐœํƒ€ ์•„์ˆ˜๋ผ๋ฐœ๋ฐœํƒ€ ์†์€ ๋ˆˆ๋ณด๋‹ค ๋น ๋ฅด๋‹ค! ๋ฌด์Šจ ํŒจ๋ฅผ ์žก๊ณ  ์‹ถ๋‹ˆ? ์•„์ˆ˜๋ผ๋ฐœ๋ฐœํƒ€ ์•„์ˆ˜๋ผ๋ฐœ๋ฐœํƒ€ ๋ˆ์„ ๋ฒŒ๊ณ  ์‹ถ๋‹ˆ? ์•„์ˆ˜๋ผ๋ฐœ๋ฐœํƒ€ ์•„์ˆ˜๋ผ๋ฐœ๋ฐœํƒ€ ๋ถ€์ž๊ฐ€๋˜๊ณ  ์‹ถ๋‹ˆ? ๋ถ€์ž๊ฐ€๋˜๊ณ  ์‹ถ์–ด? ํ™”ํˆฌํ•˜๋ฉด ๋Œ€ํ•œ๋ฏผ๊ตญ์— ๋”ฑ ์„ธ ๋ช…์ด์•ผ. ๊ฒฝ์ƒ๋„์— ์ง๊ท€, ์ „๋ผ๋„์— ์•„๊ท€, ๊ธฐ์นด๊ณ  ์ „๊ตญ์ ์œผ๋กœ ๋‚˜! ์˜ˆ์ „์— ์ง๊ท€๋ž‘ ์•„๊ท€๊ฐ€ ํ•œํŒ ๋ถ™์—ˆ๋Š”๋ฐ, ์•„๊ท€๊ฐ€ ์ง๊ท€์˜ ๊ท€๋ฅผ ์งค๋ผ ๋ฒ„๋ ธ์–ด. ๊ธฐ๋ž˜์„œ ์ง๊ท€์•ผ # Aguie-chat_v0.1 <!-- Provide a quick summary of what the model is/does. --> <!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).--> ### Model Description <!-- Provide a longer summary of what this model is. --> This model is a continual learning version of Aguie_v0.1 ### Trained Data - 3,000,000 inst data ### License This model is licensed under the cc-by-nc-nd-4.0.
QuantumIntelligence/QI-mistral-7B-slerp
QuantumIntelligence
"2024-04-11T11:58:19Z"
1,106
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "Korean", "Mistral-7B", "LLM", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-09T04:11:06Z"
--- license: apache-2.0 tags: - merge - Korean - Mistral-7B - LLM --- # QI-mistral-7B-slerp This model is based on the mistral model and merged several DPO fine-tuned models with SLERP. It processes Korean language relatively well, so it is useful when creating various applications. QI-mistral-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## ๐Ÿงฉ Configuration ```yaml slices: - sources: - model: OpenPipe/mistral-ft-optimized-1218 layer_range: [0, 32] - model: mlabonne/NeuralHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: OpenPipe/mistral-ft-optimized-1218 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ### Basic Usage ``` from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig import transformers import torch model_id = "QuantumIntelligence/QI-mistral-7B-slerp" tokenizer = AutoTokenizer.from_pretrained(model_id) # model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto") model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True) # quantization pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", tokenizer=tokenizer, ) prompt = """Classify the text into neutral, negative or positive. Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen. Sentiment: """ outputs = pipeline(prompt, max_new_tokens=6) print(outputs[0]["generated_text"]) ``` ### Using Korean - Sentiment ``` # prompt = """ # ๋‹ค์Œ ํ…์ŠคํŠธ๋ฅผ ์ค‘๋ฆฝ, ๋ถ€์ •, ๊ธ์ •์œผ๋กœ ๋ถ„๋ฅ˜ํ•ด์ค˜. # ํ…์ŠคํŠธ: ํ•˜๋Š˜์„ ๋ณด๋‹ˆ ๋น„๊ฐ€ ์˜ฌ๋“ฏ ํ•˜๋‹ค. ์šฐ์šธํ•œ ๊ธฐ๋ถ„์ด ๋“ค์–ด์„œ ์ˆ ์„ ํ•œ์ž” ํ• ๊นŒ ๊ณ ๋ฏผ์ค‘์ธ๋ฐ ๊ฐ™์ด ๋งˆ์‹ค ์‚ฌ๋žŒ์ด ์—†๋‹ค. # ๊ฐ์ •: # """ outputs = pipeline(prompt, max_new_tokens=6) print(outputs[0]["generated_text"]) # ``` - Summarization ``` prompt = """ ์ด์ˆœ์‹ (ํ•œ๊ตญ ํ•œ์ž: ๆŽ่ˆœ่‡ฃ, 1545๋…„ 4์›” 28์ผ (์Œ๋ ฅ 3์›” 8์ผ) ~ 1598๋…„ 12์›” 16์ผ (์Œ๋ ฅ 11์›” 19์ผ))์€ ์กฐ์„  ์ค‘๊ธฐ์˜ ๋ฌด์‹ ์ด์—ˆ๋‹ค. ๋ณธ๊ด€์€ ๋•์ˆ˜(ๅพทๆฐด), ์ž๋Š” ์—ฌํ•ด(ๆฑ่ซง), ์‹œํ˜ธ๋Š” ์ถฉ๋ฌด(ๅฟ ๆญฆ)์˜€์œผ๋ฉฐ, ํ•œ์„ฑ ์ถœ์‹ ์ด์—ˆ๋‹ค. ๋ฌธ๋ฐ˜ ๊ฐ€๋ฌธ ์ถœ์‹ ์œผ๋กœ 1576๋…„(์„ ์กฐ 9๋…„) ๋ฌด๊ณผ(ๆญฆ็ง‘)์— ๊ธ‰์ œ[2]ํ•˜์—ฌ ๊ทธ ๊ด€์ง์ด ๋™๊ตฌ๋น„๋ณด ๊ถŒ๊ด€, ํ›ˆ๋ จ์› ๋ด‰์‚ฌ, ๋ฐœํฌ์ง„ ์ˆ˜๊ตฐ๋งŒํ˜ธ, ์กฐ์‚ฐ๋ณด ๋งŒํ˜ธ, ์ „๋ผ๋‚จ๋„์ˆ˜์‚ฌ๋ฅผ ๊ฑฐ์ณ ์ •ํ—Œ๋Œ€๋ถ€ ์‚ผ๋„์ˆ˜๊ตฐํ†ต์ œ์‚ฌ์— ์ด๋ฅด๋ €๋‹ค. ํ•จ๊ฒฝ๋„ ๋™๊ตฌ๋น„๋ณด๊ถŒ๊ด€(่‘ฃไป‡้žๅ กๆฌŠ็ฎก), 1581๋…„ ๋ฐœํฌ ์ˆ˜๊ตฐ๋งŒํ˜ธ(้‰ขๆตฆๆฐด่ป่ฌๆˆถ)๊ฐ€ ๋˜์—ˆ๋‹ค๊ฐ€ ์ „๋ผ๋‚จ์ˆ˜์˜์˜ ์˜ค๋™๋‚˜๋ฌด๋ฅผ ๋ฒ ๊ธฐ๋ฅผ ๊ฑฐ์ ˆํ•˜์—ฌ ์ขŒ์ˆ˜์‚ฌ ์„ฑ๋ฐ•์˜ ๋ฏธ์›€์„ ๋ฐ›๊ธฐ๋„ ํ–ˆ๋‹ค. ์ดํ›„ 1584๋…„ ๋‚จ๋ณ‘์‚ฌ์˜ ๊ตฐ๊ด€๊ณผ ๊ฑด์›๋ณด๊ถŒ๊ด€, ํ›ˆ๋ จ์›์ฐธ๊ตฐ, 1586๋…„ ์‚ฌ๋ณต์‹œ์ฃผ๋ถ€๋ฅผ ๊ฑฐ์ณ ์กฐ์‚ฐ๋ณด๋งŒํ˜ธ ๊ฒธ ๋…น๋„๋‘”์ „์‚ฌ์˜(้€ ๅฑฑๅ ก่ฌๆˆถๅ…ผ้นฟๅณถๅฑฏ็”ฐไบ‹ๅฎœ)๋กœ ๋ถ€์ž„ํ–ˆ๋‹ค. ์กฐ์‚ฐ๋งŒํ˜ธ ๊ฒธ ๋…น๋‘”๋„์‚ฌ์˜ ์žฌ์ง ์ค‘ 1587๋…„(์„ ์กฐ 20๋…„) 9์›”์˜ ์—ฌ์ง„์กฑ์˜ ์‚ฌ์ „ ๊ธฐ์Šต๊ณต๊ฒฉ์œผ๋กœ ๋ฒŒ์–ด์ง„ ๋…น๋‘”๋„์ „ํˆฌ์—์„œ ์ด๊ฒผ์ง€๋งŒ ํ”ผํ•ด๊ฐ€ ์ปค์„œ, ๋ถ๋ณ‘์‚ฌ ์ด์ผ์˜ ํƒ„ํ•ต์„ ๋ฐ›๊ณ  ๋ฐฑ์˜์ข…๊ตฐ(็™ฝ่กฃๅพž่ป)ํ•˜๋Š” ์œ„์น˜์— ์„œ๊ธฐ๋„ ํ–ˆ๋‹ค. 
๊ทธ ๋’ค ๋‘๋ฒˆ์งธ ์—ฌ์ง„์กฑ๊ณผ์˜ ๊ต์ „์—์„œ ์Šน์ „, ๋ณต์งํ•˜์˜€๋‹ค. ๊ทธ ๋’ค ์ „๋ผ๊ด€์ฐฐ์‚ฌ ์ด๊ด‘(ๆŽๆดธ)์—๊ฒŒ ๋ฐœํƒ๋˜์–ด ์ „๋ผ๋„ ์กฐ๋ฐฉ์žฅ, ์„ ์ „๊ด€ ๋“ฑ์„ ์—ญ์ž„ํ–ˆ๋‹ค. 1589๋…„ ์ •์ํ˜„๊ฐ ์žฌ์ง ์ค‘ ๋ฅ˜์„ฑ๋ฃก์˜ ์ถ”์ฒœ์œผ๋กœ ๊ณ ์‚ฌ๋ฆฌ์ฒจ์‚ฌ(้ซ˜ๆฒ™้‡Œๅƒ‰ไฝฟ)๊ฐ€ ๋˜๊ณ , ์ ˆ์ถฉ์žฅ๊ตฐ(ๆŠ˜่กๅฐ‡่ป), ๋งŒํฌ์ง„์ฒจ์‚ฌ(ๆปฟๆตฆ้Žญๅƒ‰ไฝฟ), ์ง„๋„๊ตฐ์ˆ˜ ๋“ฑ์„ ๊ฑฐ์ณ ์ „๋ผ์ขŒ๋„์ˆ˜๊ตฐ์ ˆ๋„์‚ฌ๊ฐ€ ๋˜์–ด ์ž„์ง„์™œ๋ž€์„ ๋งŒ๋‚˜๊ฒŒ ๋˜์—ˆ๋‹ค. ์ž„์ง„์™œ๋ž€ ๋•Œ ์กฐ์„ ์˜ ์‚ผ๋„์ˆ˜๊ตฐํ†ต์ œ์‚ฌ๊ฐ€ ๋˜์–ด ๋ถ€ํ•˜๋“ค์„ ํ†ต์†”ํ•˜๋Š” ์ง€๋„๋ ฅ, ๋›ฐ์–ด๋‚œ ์ง€๋žต, ๊ทธ๋ฆฌ๊ณ  ํƒ์›”ํ•œ ์ „๋žต๊ณผ ๋Šฅ์ˆ˜๋Šฅ๋ž€ํ•œ ์ „์ˆ ๋กœ ์ผ๋ณธ ์ˆ˜๊ตฐ๊ณผ์˜ ํ•ด์ „์—์„œ ์—ฐ์ „์—ฐ์Šนํ•ด ๋‚˜๋ผ๋ฅผ ๊ตฌํ•œ ์„ฑ์›…(่–้›„)์œผ๋กœ ์ถ”์•™๋ฐ›๊ณ  ์žˆ๋‹ค. ๋…ธ๋Ÿ‰ ํ•ด์ „์—์„œ ์ „์‚ฌํ•œ ๋’ค ์„ ๋ฌด๊ณต์‹  1๋“ฑ๊ด€์— ์ถ”๋ก๋˜๊ณ  ์ฆ ์˜์ •๋ถ€์šฐ์˜์ •์— ์ถ”์ฆ๋˜๊ณ  ๋•ํ’๊ตฐ์— ์ถ”๋ด‰๋˜์—ˆ๋‹ค๊ฐ€, ๊ด‘ํ•ด๊ตฐ ๋•Œ ๋‹ค์‹œ ์ฆ ์˜์ •๋ถ€์ขŒ์˜์ •์— ์ถ”์ฆ๋˜๊ณ  ๋•ํ’๋ถ€์›๊ตฐ์— ์ถ”๋ด‰๋˜์—ˆ๊ณ , ์ •์กฐ ๋•Œ์—๋Š” ์ฆ ์˜์ •๋ถ€์˜์˜์ •์œผ๋กœ ๊ฐ€์ฆ(ๅŠ ่ดˆ)๋˜์—ˆ๋‹ค. ๊ณ ๋ ค ๋•Œ ์ •5ํ’ˆ ์ค‘๋ž‘์žฅ(ไธญ้ƒŽๅฐ‡)์„ ์ง€๋‚ธ ๋•์ˆ˜ ์ด์”จ์˜ ์‹œ์กฐ ์ด๋ˆ์ˆ˜(ๆŽๆ•ฆๅฎˆ)์˜ 12๋Œ€์†์ด๋ฉฐ, ์กฐ์„  ์ดˆ ์˜์ค‘์ถ”๋ถ€์‚ฌ(้ ˜ไธญๆจžๅบœไบ‹)๋ฅผ ์ง€๋‚ธ ์ด๋ณ€(ๆŽ้‚Š)[3]์˜ ํ›„์†์ด๋‹ค. ์™ธ๊ฐ€๋Š” ์ดˆ๊ณ„ ๋ณ€์”จ(ๅžๆฐ), ์ฒ˜๊ฐ€๋Š” ์˜จ์–‘ ๋ฐฉ์”จ(ๆ–นๆฐ, ๋‹น์‹œ์—๋Š” ์ƒ์ฃผ ๋ฐฉ์”จ)์ด๋‹ค. ๊ทธ์˜ ๋ฌ˜๋Š” ์ถฉ์ฒญ๋‚จ๋„ ์•„์‚ฐ์‹œ์— ์žˆ๋‹ค. ์œ„ ๋ฌธ์žฅ์„ 300์ž๋‚ด๋กœ ์š”์•ฝํ•ด์ค˜. ์š”์•ฝ: """ outputs = pipeline(prompt, max_new_tokens=300, do_sample=True, top_k=50, return_full_text = False) print(outputs[0]["generated_text"]) ``` - Question answering ``` prompt = """ ๋‹ค์Œ ๋ฌธ๋งฅ์— ๋Œ€ํ•ด ์•„๋ž˜ ์งˆ๋ฌธ์— ๋Œ€ํ•ด ๋‹ตํ•ด์ค˜. ๋ฌธ๋งฅ: 1565๋…„ ์ด์ˆœ์‹ ์€ ๋ฐฉ์”จ(ๆ–นๆฐ)์™€ ํ˜ผ์ธํ•˜๊ณ  ๋ณด์„ฑ๊ตฐ์ˆ˜๋ฅผ ์ง€๋‚ธ ์žฅ์ธ ๋ฐฉ์ง„์˜ ํ›„์›์œผ๋กœ ๋ณ‘ํ•™์„ ๋ฐฐ์šฐ๋ฉด์„œ ๋ฌด๊ณผ(ๆญฆ็ง‘)๋ฅผ ์ค€๋น„ํ•˜์˜€๋‹ค. 28์‚ด์ด๋˜ 1572๋…„(์„ ์กฐ 5๋…„) ํ›ˆ๋ จ์› ๋ณ„๊ณผ(่จ“้Œฌ้™ข ๅˆฅ็ง‘)์— ์‘์‹œํ–ˆ์œผ๋‚˜ ์‹œํ—˜์„ ๋ณด๋˜ ์ค‘, ๋ง์—์„œ ๋‚™๋งˆํ•˜์—ฌ ์ฃผ๋ณ€ ์‚ฌ๋žŒ๋“ค์ด ๊ธฐ์ ˆํ•œ ์ค„ ์•Œ์•˜์œผ๋‚˜ ์˜†์— ์žˆ๋˜ ๋ฒ„๋“œ๋‚˜๋ฌด ๊ป์งˆ์„ ๋ฒ—๊ฒจ ๋‹ค๋ฆฌ๋ฅผ ๋™์—ฌ๋งค๊ณ  ์‹œํ—˜์„ ๋๊นŒ์ง€ ์น˜๋ €๋‹ค. ํ•˜์ง€๋งŒ ๊ฒฐ๊ตญ ์‹œํ—˜์—์„œ๋Š” ๋‚™๋ฐฉํ•˜๊ณ  ๋งŒ๋‹ค. ์งˆ๋ฌธ: ์ด์ˆœ์‹ ์€ 28์‚ด์— ๋ฌด๊ณผ์— ํ•ฉ๊ฒฉํ•˜๋Š”๊ฐ€? ๋Œ€๋‹ต: """ outputs = pipeline(prompt, max_new_tokens=30, do_sample=True, top_k=50, return_full_text = False) generated_text = outputs[0]["generated_text"] print(generated_text) # ์•„๋‹ˆ์š”, 28์‚ด์— ๋ฌด๊ณผ์— ํ•ฉ๊ฒฉํ•˜์ง€ ๋ชปํ•˜์˜€๋‹ค. ``` - Chatbot style ``` messages = [{"role": "user", "content": "์ข‹์€ ์ทจ๋ฏธ๋ฅผ ๊ฐ€์ง€๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•˜๋‚˜์š”?"}] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, return_full_text = False) generated_text = outputs[0]["generated_text"] print(generated_text) ``` ### For Development The support of GPU computing resource is required for the development and implementation of state-of-the-art models. I would appreciate if anyone could help. Email: [email protected]
Habana/albert-large-v2
Habana
"2024-02-20T23:53:51Z"
1,105
0
null
[ "optimum_habana", "license:apache-2.0", "region:us" ]
null
"2022-04-22T18:05:07Z"
--- license: apache-2.0 --- [Optimum Habana](https://github.com/huggingface/optimum-habana) is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy and fast model loading, training and inference on single- and multi-HPU settings for different downstream tasks. Learn more about how to take advantage of the power of Habana HPUs to train and deploy Transformers and Diffusers models at [hf.co/hardware/habana](https://huggingface.co/hardware/habana). ## ALBERT Large model HPU configuration This model only contains the `GaudiConfig` file for running the [albert-large-v2](https://huggingface.co/albert-large-v2) model on Habana's Gaudi processors (HPU). **This model contains no model weights, only a GaudiConfig.** This enables to specify: - `use_torch_autocast`: whether to use PyTorch's autocast mixed precision - `use_fused_adam`: whether to use Habana's custom AdamW implementation - `use_fused_clip_norm`: whether to use Habana's fused gradient norm clipping operator ## Usage The model is instantiated the same way as in the Transformers library. The only difference is that there are a few new training arguments specific to HPUs. [Here](https://github.com/huggingface/optimum-habana/blob/main/examples/question-answering/run_qa.py) is a question-answering example script to fine-tune a model on SQuAD. You can run it with ALBERT Large with the following command: ```bash python run_qa.py \ --model_name_or_path albert-large-v2 \ --gaudi_config_name Habana/albert-large-v2 \ --dataset_name squad \ --do_train \ --do_eval \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 4 \ --learning_rate 5e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/squad/ \ --use_habana \ --use_lazy_mode \ --throughput_warmup_steps 3 \ --bf16 ``` Check the [documentation](https://huggingface.co/docs/optimum/habana/index) out for more advanced usage and examples.
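The CLI example passes `--use_habana`, `--use_lazy_mode`, and the Gaudi configuration by name; in a custom training script the same options are usually wired up through the `optimum-habana` Python API, roughly as sketched below (model choice, dataset handling, and paths are placeholders, so treat this as an outline rather than a drop-in script):

```python
# Rough outline of the equivalent Python-API setup; dataset handling is omitted.
from transformers import AutoModelForQuestionAnswering
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model = AutoModelForQuestionAnswering.from_pretrained("albert-large-v2")

training_args = GaudiTrainingArguments(
    output_dir="/tmp/squad/",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/albert-large-v2",
    per_device_train_batch_size=32,
    learning_rate=5e-5,
    num_train_epochs=2,
    bf16=True,
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., tokenizer=...  (omitted)
)
```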
stablediffusionapi/realistic-vision-v13
stablediffusionapi
"2023-09-08T09:12:25Z"
1,105
10
diffusers
[ "diffusers", "stablediffusionapi.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-02-01T19:53:20Z"
--- license: creativeml-openrail-m tags: - stablediffusionapi.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # API Inference ![generated from stablediffusionapi.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/0-fb96db4e-524f-4f9f-af19-899e422a2c90.png) ## Get API Key Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/). No payment needed. Replace the key in the code below and set **model_id** to "realistic-vision-v13". Coding in PHP/Node/Java etc? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs) Model link: [View model](https://stablediffusionapi.com/models/realistic-vision-v13) Credits: [View credits](https://civitai.com/?query=model_search) View all models: [View Models](https://stablediffusionapi.com/models) ```python import requests import json url = "https://stablediffusionapi.com/api/v4/dreambooth" payload = json.dumps({ "key": "Your_API_key", "model_id": "realistic-vision-v13", "prompt": "actual 8K portrait photo of gareth person, portrait, happy colors, bright eyes, clear eyes, warm smile, smooth soft skin, big dreamy eyes, beautiful intricate colored hair, symmetrical, anime wide eyes, soft lighting, detailed face, by makoto shinkai, stanley artgerm lau, wlop, rossdraws, concept art, digital painting, looking into camera", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` Use this coupon code to get 25% off: DMGG0RBN
larryvrh/Yi-34B-200K-Llamafied
larryvrh
"2024-06-25T16:51:55Z"
1,105
14
transformers
[ "transformers", "safetensors", "llama", "text-generation", "zh", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-07T01:21:36Z"
--- language: - zh - en license: apache-2.0 --- Llamafied version of 01-ai's [Yi-34B-200k](https://huggingface.co/01-ai/Yi-34B-200K) for ease of use. ## Model Performance | Model | MMLU | CMMLU | C-Eval | GAOKAO | BBH | Common-sense Reasoning | Reading Comprehension | Math & Code | | :------------ | :------: | :------: | :------: | :------: | :------: | :--------------------: | :-------------------: | :---------: | | | 5-shot | 5-shot | 5-shot | 0-shot | 3-shot@1 | - | - | - | | LLaMA2-34B | 62.6 | - | - | - | 44.1 | 69.9 | 68.0 | 26.0 | | LLaMA2-70B | 68.9 | 53.3 | - | 49.8 | 51.2 | 71.9 | 69.4 | 36.8 | | Baichuan2-13B | 59.2 | 62.0 | 58.1 | 54.3 | 48.8 | 64.3 | 62.4 | 23.0 | | Qwen-14B | 66.3 | 71.0 | 72.1 | 62.5 | 53.4 | 73.3 | 72.5 | **39.8** | | Skywork-13B | 62.1 | 61.8 | 60.6 | 68.1 | 41.7 | 72.4 | 61.4 | 24.9 | | InternLM-20B | 62.1 | 59.0 | 58.8 | 45.5 | 52.5 | 78.3 | - | 30.4 | | Aquila-34B | 67.8 | 71.4 | 63.1 | - | - | - | - | - | | Falcon-180B | 70.4 | 58.0 | 57.8 | 59.0 | 54.0 | 77.3 | 68.8 | 34.0 | | Yi-6B | 63.2 | 75.5 | 72.0 | 72.2 | 42.8 | 72.3 | 68.7 | 19.8 | | Yi-6B-200K | 64.0 | 75.3 | 73.5 | 73.9 | 42.0 | 72.0 | 69.1 | 19.0 | | **Yi-34B** | **76.3** | **83.7** | 81.4 | 82.8 | **54.3** | **80.1** | 76.4 | 37.1 | | Yi-34B-200K | 76.1 | 83.6 | **81.9** | **83.4** | 52.7 | 79.7 | **76.6** | 36.3 | While benchmarking open-source models, we have observed a disparity between the results generated by our pipeline and those reported in public sources (e.g. OpenCompass). Upon conducting a more in-depth investigation of this difference, we have discovered that various models may employ different prompts, post-processing strategies, and sampling techniques, potentially resulting in significant variations in the outcomes. Our prompt and post-processing strategy remains consistent with the original benchmark, and greedy decoding is employed during evaluation without any post-processing for the generated content. For scores that were not reported by the original authors (including scores reported with different settings), we try to get results with our pipeline. To evaluate the model's capability extensively, we adopted the methodology outlined in Llama2. Specifically, we included PIQA, SIQA, HellaSwag, WinoGrande, ARC, OBQA, and CSQA to assess common sense reasoning. SquAD, QuAC, and BoolQ were incorporated to evaluate reading comprehension. CSQA was exclusively tested using a 7-shot setup, while all other tests were conducted with a 0-shot configuration. Additionally, we introduced GSM8K (8-shot@1), MATH (4-shot@1), HumanEval (0-shot@1), and MBPP (3-shot@1) under the category "Math & Code". Due to technical constraints, we did not test Falcon-180 on QuAC and OBQA; the score is derived by averaging the scores on the remaining tasks. Since the scores for these two tasks are generally lower than the average, we believe that Falcon-180B's performance was not underestimated. ## Usage Please visit our [github repository](https://github.com/01-ai/Yi) for general guidance on how to use this model. ## Disclaimer Although we use data compliance checking algorithms during the training process to ensure the compliance of the trained model to the best of our ability, due to the complexity of the data and the diversity of language model usage scenarios, we cannot guarantee that the model will generate correct and reasonable output in all scenarios. Please be aware that there is still a risk of the model producing problematic outputs. 
We will not be responsible for any risks and issues resulting from misuse, misguidance, illegal usage, and related misinformation, as well as any associated data security concerns. ## License The Yi series models are fully open for academic research and free commercial usage with permission via applications. All usage must adhere to the [Model License Agreement 2.0](https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE). To apply for the official commercial license, please contact us ([[email protected]](mailto:[email protected])).
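Usage note: because the weights are llamafied, the checkpoint should load through the stock Llama code path in transformers, without `trust_remote_code`. A minimal sketch is below; for a 34B model you will typically want multi-GPU sharding or quantization, which is only hinted at here:

```python
# Minimal loading sketch for the llamafied checkpoint; no trust_remote_code should be needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "larryvrh/Yi-34B-200K-Llamafied"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shards the 34B weights across available GPUs
)

inputs = tokenizer("The capital of South Korea is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```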
athirdpath/Nethena-20b-Glued
athirdpath
"2023-11-14T09:19:59Z"
1,105
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-12T06:43:09Z"
--- license: cc-by-nc-4.0 --- This is [NeverSleep/Nethena-20B](https://huggingface.co/NeverSleep/Nethena-20B) with [athirdpath/Nethena-20b-Glue-LORA](https://huggingface.co/athirdpath/Nethena-20b-Glue-LORA) applied. athirdpath/Nethena-20b-Glue-LORA is a 128-rank LoRA for RP, trained on a private dataset. It is unaligned and NSFW-oriented. This is a test, exploring the effects of "gluing" the components of the 20b model together to reduce the iconic word replacement errors, increase lucidity, and improve recall. ![image/png](https://huggingface.co/athirdpath/Nethena-20b-Glued/resolve/main/b5787896-afd5-44a3-b757-0e75ee28bed8.png) The private ~500k token dataset used to train the LoRA was Alpaca formatted and focused on 4 primary categories: - Medical texts (on psychology, reproductive organs, anatomy, and pregnancy). These are formatted so the model, in character as a doctor or therapist, answers a patient's question in short to medium form. - Excerpts from short stories and novellas (erotic and romantic) centered around both realistic and fantastic situations, covering several fetishes as well. These are sliced into ~2048 token chunks, and these long-form responses are all tied to the command "Enter narrator mode." in the instructions. - A selection from PIPPA, using a wide keyword search for tokens associated with low quality human or AI data to remove those responses, then a positive search was done for words and phrases associated with a higher reading level. These are converted to Alpaca with "Enter RP mode." in all the instruction fields. - ~18k tokens of GPT-4 generated data on role-playing from various characters' perspectives, focusing on different situations and emotions. Includes many multi-turn conversations. So far it is passing subjective testing with flying colors; objective numbers coming soon. Trained with Alpaca-style prompts.
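The published weights already include the Glue LORA, so the following is purely illustrative: applying a LoRA adapter to a base model and baking it into the weights is normally done with PEFT roughly as below (paths and dtype are assumptions, not the author's actual merge script):

```python
# Illustrative only: the released Nethena-20b-Glued weights already include this step.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "NeverSleep/Nethena-20B", torch_dtype=torch.float16, device_map="auto"
)
glued = PeftModel.from_pretrained(base, "athirdpath/Nethena-20b-Glue-LORA")
glued = glued.merge_and_unload()               # bake the adapter into the base weights
glued.save_pretrained("./Nethena-20b-Glued")   # assumed output path
```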
Edentns/DataVortexM-7B-Instruct-v0.1
Edentns
"2024-02-24T14:19:16Z"
1,105
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-04T00:21:22Z"
--- tags: - text-generation license: cc-by-nc-sa-4.0 language: - ko base_model: mistralai/Mistral-7B-Instruct-v0.2 pipeline_tag: text-generation datasets: - beomi/KoAlpaca-v1.1a --- # **DataVortexM-7B-Instruct-v0.1** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ### **Trained On** - **OS**: Ubuntu 20.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Dataset** - [beomi/KoAlpaca-v1.1a](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a) ### **Instruction format** It follows **Alpaca** format. E.g. ```python text = """\ ๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค. ### Instruction: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ? ### Response: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค. ### Instruction: ์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** On Benchmarking ... | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----: | -----: | ------: | ------: | | kobest_boolq | 0.0 | 0.0 | 0.0 | 0.0 | | kobest_copa | 0.0 | 0.0 | 0.0 | 0.0 | | kobest_hellaswag | 0.0 | 0.0 | 0.0 | 0.0 | | kobest_sentineg | 0.0 | 0.0 | 0.0 | 0.0 | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 39.81 | 34.13 | 42.35 | 38.73 | 45.46 | 38.37 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexM-7B-Instruct-v0.1") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexM-7B-Instruct-v0.1") messages = [ {"role": "system", "content": "๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ?"}, {"role": "assistant", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
kodonho/llama2-chat-koalpaca
kodonho
"2024-01-12T01:54:43Z"
1,105
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "dataset:beomi/KoAlpaca-v1.1a", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-06T11:00:56Z"
--- license: llama2 datasets: - beomi/KoAlpaca-v1.1a language: - ko --- # Llama2-based model with KoAlpaca dataset This is an English/Korean model based on * [meta-llama/Llama-2-7b-chat-hf]
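The card does not include usage code or a prompt template. Since the base is Llama-2-7b-chat, the standard Llama-2 `[INST]` format is a reasonable default, though the template actually used after KoAlpaca fine-tuning is not documented; a minimal sketch under that assumption:

```python
# Minimal usage sketch; the [INST] chat format is an assumption, not documented by the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kodonho/llama2-chat-koalpaca"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "[INST] Please introduce yourself briefly. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```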
cocoirun/Yi-Ko-6B-instruct-v2.0
cocoirun
"2024-01-18T09:09:54Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-18T08:50:35Z"
--- license: cc-by-sa-4.0 --- <h1>instruct ๋ชจ๋ธ v2.0</h1> <b><ํ•™์Šต ๋ฐ์ดํ„ฐ ๊ตฌ์ถ•></b> Open-Orca-ko ๋ฐ์ดํ„ฐ๋ฅผ ๋ถ„์„ํ•˜์—ฌ ํƒœ์Šคํฌ๋ฅผ ์ถ”์ถœํ•œ ๋’ค ํ•ด๋‹น ํƒœ์Šคํฌ์— ๋งž์ถฐ์„œ NLP ๊ด€๋ จ ์˜คํ”ˆ์†Œ์Šค ๋ฐ์ดํ„ฐ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ํ•™์Šต๋ฐ์ดํ„ฐ๋ฅผ ์ž์ฒด์ ์œผ๋กœ ์•ฝ 4๋งŒ๊ฑด(์—ญ์‚ฌ, ๊ณผํ•™, ์ˆ˜ํ•™, ๊ธฐ๊ณ„๋…ํ•ด, ๋ฆฌ๋ทฐ ๋ถ„์„) ๊ตฌ์ถ•ํ•˜์˜€๊ณ , ๊ทธ ์™ธ์— Open-Orca-Ko์—์„œ ๋ฐ์ดํ„ฐ๋ฅผ ์ผ๋ถ€ ํ•„ํ„ฐ๋งํ•˜์—ฌ ์ •์ œํ•ด๊ฑฐ๋‚˜ KoBEST ๋ฐ์ดํ„ฐ๋ฅผ ํ•จ๊ป˜ ์ถ”๊ฐ€ํ•˜์˜€์Šต๋‹ˆ๋‹ค. aihub ์ผ๋ฐ˜์ƒ์‹ ๋ฐ ๊ธฐ๊ณ„๋…ํ•ด ๋ฐ์ดํ„ฐ๋ฅผ ํ™œ์šฉํ•˜์—ฌ ์ถ”๊ฐ€๋กœ ํ•™์Šต ๋ฐ์ดํ„ฐ๋ฅผ ๊ตฌ์ถ•(ํ˜•ํƒœ์†Œ ๊ด€๋ จ, ๊ธฐ๊ณ„๋…ํ•ด ๊ด€๋ จ ๋ฐ ์š”์•ฝ) ๊ฐ์ข… ๋ธ”๋กœ๊ทธ์—์„œ ์—ญ์‚ฌ ๋ฐ ์ƒ์‹ ํ€ด์ฆˆ๋ฅผ ์‚ฌ๋žŒ์ด ์ง์ ‘ ํ•™์Šต๋ฐ์ดํ„ฐ ํ˜•ํƒœ๋กœ ๋ณ€๊ฒฝ AI2AI Challenge ๋ฐ์ดํ„ฐ๋ฅผ ํŒŒํŒŒ๊ณ ๋ฅผ ํ†ตํ•ด ๋ฒˆ์—ญ ๋ฐ ์˜ค์—ญ๋œ ๋ถ€๋ถ„์„ ์‚ฌ๋žŒ์ด ์ง์ ‘ ์ˆ˜์ • ํ•˜๋Š” ์ž‘์—…์„ ์ˆ˜ํ–‰ ์˜์–ด ๋ฒˆ์—ญ ๋ฐ์ดํ„ฐ ์˜ํ•œ/ํ•œ์˜ ๋ฐ์ดํ„ฐ ํ•™์Šต ๋ฐ์ดํ„ฐ๋กœ ํ™œ์šฉ ์ง„ํ–‰ ์ด 11๋งŒ๊ฐœ์˜ ํ•™์Šต๋ฐ์ดํ„ฐ๋กœ sft๋ฅผ ์ง„ํ–‰ํ•˜์˜€์Šต๋‹ˆ๋‹ค. <br> ํ˜„์žฌ, ์ƒˆ๋กœ์šด ๋ฒ„์ „์˜ ๋ชจ๋ธ ํ•™์Šต ๋ฐ ์„ฑ๋Šฅ์„ ์œ„ํ•ด Open-Orca ๋ฐ์ดํ„ฐ์…‹ ์ผ๋ถ€๋ฅผ ๋ฒˆ์—ญํ•˜์—ฌ ์ •์ œ ์ค‘์— ์žˆ์Šต๋‹ˆ๋‹ค. <br> + ๊ณ ๋“ฑํ•™๊ต ์—ญ์‚ฌ ๋ฌธ์ œ ๋ฐ TruthfulQA ๊ด€๋ จ ๋ฌธ์ œ ์ถ”๊ฐ€๋ฅผ ์ง„ํ–‰ํ•˜์˜€์Šต๋‹ˆ๋‹ค. + ๊ฐ์ข… it ์ง€์‹ ๋ฐ์ดํ„ฐ ์ถ”๊ฐ€์ง„ํ–‰. + ๊ธฐ๊ณ„๋…ํ•ด ๊ด€๋ จ ํ•™์Šต ๋ฐ์ดํ„ฐ๋ฅผ ChatGPT๋ฅผ ํ†ตํ•ด์„œ ๋‹ต๋ณ€์„ ์–ป์–ด ํ•™์Šต + ๋ฌธ๋ฒ•๊ด€๋ จ ํ•™์Šต ๋ฐ์ดํ„ฐ <br> ###ํ•™์Šต ๋ฐ์ดํ„ฐ ํŒŒ์ผ์€ ๋น„๊ณต๊ฐœ์ž…๋‹ˆ๋‹ค. <br> <b><ํ•™์Šต></b> ํ•™์Šต์€ LoRA๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ A100 40G *2์—์„œ ํ•™์Šต์„ ์ง„ํ–‰ํ•˜์˜€์Šต๋‹ˆ๋‹ค.
DopeorNope/Mistralopithecus-v0.1-10.8B
DopeorNope
"2024-01-20T03:03:26Z"
1,105
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-19T17:05:44Z"
--- license: cc-by-nc-sa-4.0 --- ## Model Details **Model Developers** Seungyoo Lee (DopeorNope) ์ด ๋ชจ๋ธ์€ Mistral Base์˜ ์ƒˆ๋กœ์šด ์•„ํ‚คํ…์ณ์ด๋ฉฐ, 10.7B์˜ ํŒŒ๋ผ๋ฏธํ„ฐ๋กœ ๊ตฌ์„ฑ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. (Solar๋‚˜, ์‹œ๋‚˜ํŠธ๋ผ ๋ฒ ์ด์Šค ๋ชจ๋ธ์ด ์•„๋‹™๋‹ˆ๋‹ค.) ์•ฝ 1.5B์˜ ํ† ํฐ์œผ๋กœ pretrain ๋˜์—ˆ์œผ๋‚˜, ์‹คํ—˜๋‹จ๊ณ„๋กœ ํ–ฅํ›„ ๋‹ค์‹œ ํ›ˆ๋ จ๋˜์–ด ์ƒˆ๋กญ๊ฒŒ ๋‚˜์˜ฌ ์˜ˆ์ •์ž…๋‹ˆ๋‹ค. ํ…Œ์ŠคํŠธ์šฉ์œผ๋กœ ์˜ฌ๋ ค๋ด…๋‹ˆ๋‹ค. Context length๊ฐ€ 32k ๊นŒ์ง€์ง€์› ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์ด๋ฉฐ, ํ–ฅํ›„ ๋” ์™„๋ฒฝํ•˜๊ฒŒ ์„ค๊ณ„ํ•˜์—ฌ ์˜ฌ๋ฆฌ๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.
GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15
GAI-LLM
"2024-01-23T07:01:56Z"
1,105
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-23T02:15:59Z"
--- license: cc-by-nc-4.0 language: - ko library_name: transformers pipeline_tag: text-generation --- **The license is `cc-by-nc-4.0`.** # **GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15** ## Model Details **Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng) **Input** Models input text only. **Output** Models generate text only. **Model Architecture** GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 is an auto-regressive language model based on the LLaMA2 transformer architecture. **Base Model** [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) **Training Dataset** - We combined open Korean datasets using a mixed strategy. - We used 8 x A100 80GB GPUs for training. # **Model Benchmark** ## KO-LLM leaderboard - Follow up on the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). # Implementation Code ```python ### GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15 from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "GAI-LLM/OPEN-SOLAR-KO-10.7B-mixed-v15" model = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) ```
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v1.0
ITT-AF
"2024-02-14T06:34:36Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-25T16:25:47Z"
--- license: cc-by-nc-4.0 --- # ITT-AF/ITT-42dot_LLM-PLM-1.3B-v1.0 This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.0.0 - Tokenizers 0.15.0
Edentns/DataVortexS-10.7B-dpo-v1.4
Edentns
"2024-02-24T14:17:27Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:yanolja/Bookworm-10.7B-v0.4-DPO", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-26T00:47:24Z"
--- tags: - text-generation license: cc-by-nc-4.0 language: - ko base_model: yanolja/Bookworm-10.7B-v0.4-DPO pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.4** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [yanolja/Bookworm-10.7B-v0.4-DPO](https://huggingface.co/yanolja/Bookworm-10.7B-v0.4-DPO) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **ChatML** format. E.g. ```python text = """\ <|im_start|>system ๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค.<|im_end|> <|im_start|>user ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ?<|im_end|> <|im_start|>assistant ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค.<|im_end|> <|im_start|>user ์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ?<|im_end|> <|im_start|>assistant """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----------: | -----------: | -----------: | -----------: | | kobest_boolq | 0.757911 | 0.907177 | 0.924496 | 0.605075 | | kobest_copa | 0.740605 | 0.801886 | 0.831886 | 0.849978 | | kobest_hellaswag | 0.445176 | 0.454788 | 0.468654 | 0.45218 | | kobest_sentineg | 0.415445 | 0.95214 | 0.962217 | 0.967254 | | **Average** | **0.589784** | **0.778998** | **0.796813** | **0.718622** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 53.81 | 52.05 | 62.93 | 53.59 | 50.42 | 50.06 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.4") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.4") messages = [ {"role": "system", "content": "๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ?"}, {"role": "assistant", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/). which allows others to share and adapt the model for non-commercial purposes. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
Edentns/DataVortexS-10.7B-dpo-v1.10
Edentns
"2024-02-24T14:16:33Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:beomi/OPEN-SOLAR-KO-10.7B", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-01T07:02:25Z"
--- tags: - text-generation license: cc-by-nc-4.0 language: - ko base_model: beomi/OPEN-SOLAR-KO-10.7B pipeline_tag: text-generation --- # **DataVortexS-10.7B-dpo-v1.10** <img src="./DataVortex.png" alt="DataVortex" style="height: 8em;"> ## Our Team | Research & Engineering | Product Management | | :--------------------: | :----------------: | | Kwangseok Yang | Seunghyun Choi | | Jeongwon Choi | Hyoseok Choi | ## **Model Details** ### **Base Model** [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B) ### **Trained On** - **OS**: Ubuntu 22.04 - **GPU**: H100 80GB 4ea - **transformers**: v4.36.2 ### **Instruction format** It follows **Alpaca (Chat)** format. E.g. ```python text = """\ ### System: ๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค. ### User: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ? ### Assistant: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค. ### User: ์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ? """ ``` ## **Model Benchmark** ### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)** | Task | 0-shot | 5-shot | 10-shot | 50-shot | | :--------------- | -----------: | -----------: | -----------: | -----------: | | kobest_boolq | 0.874261 | 0.897165 | 0.904985 | 0.907857 | | kobest_copa | 0.807479 | 0.845701 | 0.860809 | 0.8719 | | kobest_hellaswag | 0.504865 | 0.502074 | 0.50717 | 0.51609 | | kobest_sentineg | 0.409404 | 0.967251 | 0.992443 | 0.982367 | | **Average** | **0.649002** | **0.803048** | **0.816352** | **0.819553** | ### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)** | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 | | ------: | -----: | -----------: | ------: | ------------: | --------------: | | 56.32 | 54.27 | 63.16 | 49.95 | 55.08 | 59.15 | ## **Implementation Code** This model contains the chat_template instruction format. You can use the code below. ```python from transformers import AutoModelForCausalLM, AutoTokenizer device = "cuda" # the device to load the model onto model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.10") tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.10") messages = [ {"role": "system", "content": "๋‹น์‹ ์€ ์‚ฌ๋žŒ๋“ค์ด ์ •๋ณด๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ๋„๋ก ๋„์™€์ฃผ๋Š” ์ธ๊ณต์ง€๋Šฅ ๋น„์„œ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์•ผ?"}, {"role": "assistant", "content": "๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์„œ์šธ์ž…๋‹ˆ๋‹ค."}, {"role": "user", "content": "์„œ์šธ ์ธ๊ตฌ๋Š” ์ด ๋ช‡ ๋ช…์ด์•ผ?"} ] encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt") model_inputs = encodeds.to(device) model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) decoded = tokenizer.batch_decode(generated_ids) print(decoded[0]) ``` ## **License** This model is licensed under the [cc-by-nc-4.0](https://creativecommons.org/licenses/by-nc/4.0/). which allows others to share and adapt the model for non-commercial purposes. <div align="center"> <a href="https://edentns.com/"> <img src="./Logo.png" alt="Logo" style="height: 3em;"> </a> </div>
mu0gum/AIFT-polyglot-ko-1.3b-ao-instruct-v0.91
mu0gum
"2024-02-14T06:28:51Z"
1,105
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-08T17:09:40Z"
---
license: cc-by-nc-4.0
---

# AIFT-polyglot-ko-1.3b-ao-instruct-v0.91

Base model: EleutherAI/polyglot-ko-1.3b

Training data: about 48,000 examples from a self-built Open Orca-style dataset (after deduplication and rebalancing of the data distribution)

Training method: full fine-tuning

epoch: 3

## ko-lm-evaluation-harness (5-shot)

|kobest_boolq|kobest_copa|kobest_hellaswag|pawsx_ko|
|--|--|--|--|
|0.5398860398860399|0.71|0.436|0.476|

## Framework versions

- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
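The card does not include a usage snippet, so below is a minimal sketch assuming the model loads through the standard `transformers` causal-LM API; the prompt wording and generation settings are illustrative assumptions, not part of the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mu0gum/AIFT-polyglot-ko-1.3b-ao-instruct-v0.91"

# Load the tokenizer and model (a GPT-NeoX architecture, per the repository tags).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical instruction-style prompt; check the repository for the exact template used in training.
prompt = "### Instruction:\n๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```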
c1park/kosolra-kullm-LDCC-merge
c1park
"2024-03-05T05:19:30Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "ko", "en", "dataset:nlpai-lab/kullm-v2", "base_model:LDCC/LDCC-SOLAR-10.7B", "base_model:realPCH/kosolra-kullm", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-21T23:30:22Z"
---
license: mit
datasets:
- nlpai-lab/kullm-v2
base_model:
- LDCC/LDCC-SOLAR-10.7B
- realPCH/kosolra-kullm
language:
- ko
- en
tags:
- merge
- mergekit
---

### Model Strategy

We merged the following two models with the SLERP method using the [mergekit](https://github.com/arcee-ai/mergekit) library:

- [realPCH/kosolra-kullm](https://huggingface.co/realPCH/kosolra-kullm)
- [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B)

Both are based on [yanolja/KoSOLAR-10.7B-v0.1](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.1-deprecated).

### Run the model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "c1park/kosolra-kullm-LDCC-merge"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

text = "[INST] Put instruction here. [/INST]"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_DPOv3
ENERGY-DRINK-LOVE
"2024-03-16T08:19:00Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-07T09:49:29Z"
--- license: cc-by-nc-4.0 base_model: ENERGY-DRINK-LOVE/leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT tags: - trl - dpo - generated_from_trainer model-index: - name: nhn_dpo_v3_leaderboard_inst_v1.3_Open-Hermes_LDCC-SOLAR-10.7B_SFT_DPO results: [] ---
hwkwon/S-SOLAR-10.7B-v1.3
hwkwon
"2024-03-08T12:10:26Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-08T11:54:14Z"
---
license: cc-by-nc-4.0
language:
- ko
---

# S-SOLAR-10.7B

<!-- Provide a quick summary of what the model is/does. -->

<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0).

### Trained Data

TBA

### Prompt Template

```
### User: User query input

### Assistant:
```

### License

This model is licensed under the cc-by-nc-4.0 license, which allows others to share and adapt the model for non-commercial purposes.
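The card specifies the prompt template but no loading code, so here is a minimal sketch that applies that template with the standard `transformers` API; the example question and generation settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hwkwon/S-SOLAR-10.7B-v1.3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

# Build the prompt from the template documented above.
prompt = "### User: ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```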
jylee420/gemma-2b-data-std-v0
jylee420
"2024-03-13T01:45:03Z"
1,105
0
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-13T01:40:51Z"
--- library_name: transformers tags: [] license: "other" --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
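The "How to Get Started" section above is left unfilled, so the snippet below is only a hedged sketch: it assumes the checkpoint loads with the standard `transformers` causal-LM API for Gemma-architecture models (a recent `transformers` release with Gemma support is needed), and the prompt is an arbitrary example rather than a documented format.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jylee420/gemma-2b-data-std-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Arbitrary example prompt; the intended prompt format is not documented in this card.
inputs = tokenizer("The capital of South Korea is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```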
freewheelin/free-solar-slerp-v0.3
freewheelin
"2024-03-14T14:18:25Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:davidkim205/komt-solar-10.7b-sft-v5", "base_model:colable/LDCC-CCK-slerp", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-13T01:47:30Z"
---
base_model:
- davidkim205/komt-solar-10.7b-sft-v5
- colable/LDCC-CCK-slerp
library_name: transformers
tags:
- mergekit
- merge
license: mit
---

# free-solar-0.3

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:

* [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5)
* [colable/LDCC-CCK-slerp](https://huggingface.co/colable/LDCC-CCK-slerp)
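For readers unfamiliar with the merge method, the sketch below illustrates the idea behind SLERP on a single pair of weight tensors. It is a simplified illustration only; mergekit's actual implementation differs in details such as per-layer interpolation factors and dtype handling.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two flattened weight vectors.
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).view_as(a)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.view_as(a)

# Toy tensors standing in for corresponding weights of the two merged models.
w_a, w_b = torch.randn(4, 4), torch.randn(4, 4)
print(slerp(0.5, w_a, w_b).shape)
```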
freewheelin/free-solar-instrunction-v0.3
freewheelin
"2024-03-15T02:32:32Z"
1,105
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "en", "arxiv:2312.15166", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-15T02:03:42Z"
--- language: - ko - en license: mit --- # Model Card for free-solar-instruction-v0.3 ## Developed by : [Freewheelin](https://freewheelin-recruit.oopy.io/) AI Technical Team ## Hardware and Software * **Training Factors**: We fine-tuned this model using the [HuggingFace TRL Trainer](https://huggingface.co/docs/trl/trainer) ## Method - This model was trained using the learning method introduced in the [SOLAR paper](https://arxiv.org/pdf/2312.15166.pdf). ## Base Model - [davidkim205/komt-solar-10.7b-sft-v5](https://huggingface.co/davidkim205/komt-solar-10.7b-sft-v5)
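The card names the HuggingFace TRL trainer but does not show a training script, so the following is only a rough sketch of what supervised fine-tuning with TRL's `SFTTrainer` can look like; the dataset path, text column, and hyperparameters are placeholders rather than the team's actual recipe, and the exact `SFTTrainer` arguments vary across `trl` versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "davidkim205/komt-solar-10.7b-sft-v5"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder dataset: a JSONL file with a "text" column holding full prompt+response strings.
dataset = load_dataset("json", data_files="instructions.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # column containing the formatted training text
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="free-solar-sft",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
    ),
)
trainer.train()
```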
Heoni/Aguie-chat-proto
Heoni
"2024-04-04T12:13:55Z"
1,105
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ko", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-04T10:01:50Z"
---
license: cc-by-nc-nd-4.0
language:
- ko
- en
---

Asura balbalta! Asura balbalta! The hand is quicker than the eye! Which card do you want to draw?
Asura balbalta! Asura balbalta! Do you want to make money?
Asura balbalta! Asura balbalta! Do you want to get rich?
You want to get rich? In hwatu there are exactly three names in all of Korea: Jjak-gwi in Gyeongsang-do, A-gwi in Jeolla-do, and, nationwide, me!
A while back Jjak-gwi and A-gwi went head to head, and A-gwi cut off Jjak-gwi's ear. That's why he's called Jjak-gwi.

# Aguie-chat-proto

<!-- Provide a quick summary of what the model is/does. -->

<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->

### Model Description

<!-- Provide a longer summary of what this model is. -->

This model is a fine-tuned version of LLama2-koen-13b.

### Trained Data

- 3,000,000 Korean instruction-following examples

### Prompt Template

```
### Instruction:

### Output:
```

### License

This model is licensed under the cc-by-nc-nd-4.0.
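The card gives only the bare prompt template, so here is a minimal sketch that fills it in with the standard `transformers` API; the question and sampling settings are illustrative assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Heoni/Aguie-chat-proto"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

# Prompt built from the template documented above.
prompt = "### Instruction:\n๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š” ์–ด๋””์ธ๊ฐ€์š”?\n\n### Output:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```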
QuantumIntelligence/QI-neural-chat-7B-ko-DPO
QuantumIntelligence
"2024-04-11T23:36:21Z"
1,105
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Korean", "LLM", "Chatbot", "DPO", "Intel/neural-chat-7b-v3-3", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-11T13:21:56Z"
--- license: apache-2.0 library_name: transformers tags: - Korean - LLM - Chatbot - DPO - Intel/neural-chat-7b-v3-3 --- # QI-neural-chat-7B-ko-DPO This is a fine tuned model based on the [neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3) with Korean DPO dataset([Oraca-DPO-Pairs-KO](https://huggingface.co/datasets/Ja-ck/Orca-DPO-Pairs-KO)). It processes Korean language relatively well, so it is useful when creating various applications. ### Basic Usage ``` from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig import transformers import torch model_id = "QuantumIntelligence/QI-neural-chat-7B-ko-DPO" tokenizer = AutoTokenizer.from_pretrained(model_id) # model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto") model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", load_in_8bit=True) # quantization pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", tokenizer=tokenizer, ) prompt = """Classify the text into neutral, negative or positive. Text: This movie is definitely one of my favorite movies of its kind. The interaction between respectable and morally strong characters is an ode to chivalry and the honor code amongst thieves and policemen. Sentiment: """ outputs = pipeline(prompt, max_new_tokens=6) print(outputs[0]["generated_text"]) ``` ### Using Korean - Sentiment ``` prompt = """ ๋‹ค์Œ ํ…์ŠคํŠธ๋ฅผ ์ค‘๋ฆฝ, ๋ถ€์ •, ๊ธ์ •์œผ๋กœ ๋ถ„๋ฅ˜ํ•ด์ค˜. ํ…์ŠคํŠธ: ํ•˜๋Š˜์„ ๋ณด๋‹ˆ ๋น„๊ฐ€ ์˜ฌ๋“ฏ ํ•˜๋‹ค. ์šฐ์šธํ•œ ๊ธฐ๋ถ„์ด ๋“ค์–ด์„œ ์ˆ ์„ ํ•œ์ž” ํ• ๊นŒ ๊ณ ๋ฏผ์ค‘์ธ๋ฐ ๊ฐ™์ด ๋งˆ์‹ค ์‚ฌ๋žŒ์ด ์—†๋‹ค. ๋ถ„๋ฅ˜: """ outputs = pipeline(prompt, max_new_tokens=6) print(outputs[0]["generated_text"]) ``` - Summarization ``` prompt = """ ๊ตญ๋‚ด ์—ฐ๊ตฌ์ง„์ด ๋ฏธ๊ตญ, ์˜๊ตญ ๊ณต๋™ ์—ฐ๊ตฌํŒ€๊ณผ ์ฒญ๊ฐ ๊ธฐ๋Šฅ์— ๊ด€์—ฌํ•˜๋Š” ๋‹จ๋ฐฑ์งˆ ๊ตฌ์กฐ๋ฅผ ๊ทœ๋ช…ํ–ˆ๋‹ค. ๋‚œ์ฒญ ์น˜๋ฃŒ๋ฒ•์„ ๊ฐœ๋ฐœํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋  ๊ฒƒ์œผ๋กœ ๋ณด์ธ๋‹ค. ํฌ์Šคํ…์€ ์กฐ์œค์ œ ์ƒ๋ช…๊ณผํ•™๊ณผ ๊ต์ˆ˜ ์—ฐ๊ตฌํŒ€์ด ๊น€๊ด‘ํ‘œ ๊ฒฝํฌ๋Œ€ ์‘์šฉํ™”ํ•™๊ณผ ๊ต์ˆ˜ ์—ฐ๊ตฌํŒ€, ๋ธŒ์…ฐ๋ณผ๋กœ๋“œ ์นดํŠธ๋ฆฌ์น˜ ๋ฏธ๊ตญ ์„œ๋˜ ์บ˜๋ฆฌํฌ๋‹ˆ์•„๋Œ€ ๊ต์ˆ˜ ์—ฐ๊ตฌํŒ€, ์บ๋กค ๋กœ๋นˆ์Šจ ์˜๊ตญ ์˜ฅ์Šคํผ๋“œ๋Œ€ ๊ต์ˆ˜์™€ ํ•จ๊ป˜ ์ฒญ๊ฐ ๊ด€๋ จ ํŠน์ • ์ˆ˜์šฉ์ฒด ๋‹จ๋ฐฑ์งˆ ๊ตฌ์กฐ์™€ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ๋ฐํžˆ๋Š” ๋ฐ ์„ฑ๊ณตํ–ˆ๋‹ค๊ณ  11์ผ ๋ฐํ˜”๋‹ค. ๊ท€ ์•ˆ์ชฝ์—๋Š” ์†Œ๋ฆฌ๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ๋‹ฌํŒฝ์ด๊ด€๊ณผ ํ‰ํ˜•๊ฐ๊ฐ์„ ๋‹ด๋‹นํ•˜๋Š” ์ „์ •๊ธฐ๊ด€์ด ์žˆ๋‹ค. ์ด ๊ธฐ๊ด€๋“ค์˜ ์„ธํฌ๋“ค์€ ์ˆ˜์šฉ์ฒด ๋‹จ๋ฐฑ์งˆ์ธ โ€˜GPR156โ€™์„ ๊ฐ–๊ณ  ์žˆ๋‹ค. GPR156์ด ํ™œ์„ฑํ™”๋˜๋ฉด ์„ธํฌ ๋‚ด G๋‹จ๋ฐฑ์งˆ๊ณผ ๊ฒฐํ•ฉํ•ด ์‹ ํ˜ธ๋ฅผ ์ „๋‹ฌํ•œ๋‹ค. G๋‹จ๋ฐฑ์งˆ์€ โ€˜๊ตฌ์•„๋‹Œ ๋‰ดํด๋ ˆ์˜คํƒ€์ด๋“œ-๊ฒฐํ•ฉ ๋‹จ๋ฐฑ์งˆโ€™๋กœ ์‹ ํ˜ธ๋ฅผ ์ „๋‹ฌํ•˜๋Š” ์ค‘๊ฐœ์ž๋‹ค. GPR156์€ ๋‹ค๋ฅธ ์ˆ˜์šฉ์ฒด์™€ ๋‹ฌ๋ฆฌ ํŠน๋ณ„ํ•œ ์ž๊ทน์ด ์—†์–ด๋„ ํ•ญ์ƒ ๋†’์€ ํ™œ์„ฑ์„ ์œ ์ง€ํ•˜๋ฉฐ ์ฒญ๊ฐ๊ณผ ํ‰ํ˜• ๊ธฐ๋Šฅ ์œ ์ง€์— ํฐ ์—ญํ• ์„ ํ•œ๋‹ค. ์„ ์ฒœ์ ์œผ๋กœ ์ฒญ๊ฐ ์žฅ์• ๊ฐ€ ์žˆ๋Š” ํ™˜์ž๋“ค์„ ์น˜๋ฃŒํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์ด ๋‹จ๋ฐฑ์งˆ์˜ ๊ตฌ์กฐ์™€ ์ž‘์šฉ ๋ฉ”์ปค๋‹ˆ์ฆ˜์„ ์•Œ์•„์•ผ ํ•œ๋‹ค. ์—ฐ๊ตฌํŒ€์€ ์ดˆ์ €์˜จ์ „์žํ˜„๋ฏธ๊ฒฝ(Cryo-EM) ๋ถ„์„๋ฒ•์„ ์‚ฌ์šฉํ•ด GPR156๊ณผ GPR156-G๋‹จ๋ฐฑ์งˆ ๊ฒฐํ•ฉ ๋ณตํ•ฉ์ฒด๋ฅผ ๊ณ ํ•ด์ƒ๋„๋กœ ๊ด€์ฐฐํ–ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์ˆ˜์šฉ์ฒด๋ฅผ ํ™œ์„ฑํ™”ํ•˜๋Š” ์ž‘์šฉ์ œ ์—†์ด๋„ GPR156์ด ๋†’์€ ํ™œ์„ฑ์„ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ๋Š” ์›์ธ์„ ์ฐพ์•˜๋‹ค. GPR156์€ ์„ธํฌ๋ง‰์— ํ’๋ถ€ํ•œ ์ธ์ง€์งˆ๊ณผ ๊ฒฐํ•ฉํ•ด ํ™œ์„ฑํ™”๋๋‹ค. ์„ธํฌ์งˆ์— ์žˆ๋Š” G๋‹จ๋ฐฑ์งˆ๊ณผ์˜ ์ƒํ˜ธ์ž‘์šฉ์„ ํ†ตํ•ด ์ž์ฒด์ ์œผ๋กœ ๊ตฌ์กฐ๋ฅผ ๋ณ€ํ˜•, ๋†’์€ ํ™œ์„ฑ์„ ์œ ์ง€ํ•œ๋‹ค๋Š” ์‚ฌ์‹ค๋„ ํ™•์ธ๋๋‹ค. 
๊ธฐ์กด์— ์•Œ๋ ค์ง„ ์ˆ˜์šฉ์ฒด ๋‹จ๋ฐฑ์งˆ๋“ค๊ณผ ๋‹ฌ๋ฆฌ GPR156์€ ์„ธํฌ๋ง‰์„ ํ†ต๊ณผํ•˜๋Š” 7๋ฒˆ์งธ ํž๋ฆญ์Šค ๋ง๋‹จ ๋ถ€๋ถ„์˜ ๊ตฌ์กฐ๋ฅผ ์œ ์—ฐํ•˜๊ฒŒ ๋ฐ”๊พธ๋ฉฐ G๋‹จ๋ฐฑ์งˆ๊ณผ์˜ ๊ฒฐํ•ฉ์„ ์œ ๋„ํ–ˆ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์‹ ํ˜ธ๋ฅผ ํ™œ์„ฑํ™”ํ•จ์œผ๋กœ์จ ์†Œ๋ฆฌ๋ฅผ ๊ฐ์ง€ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ฃผ์—ˆ๋‹ค. ์กฐ ๊ต์ˆ˜๋Š” โ€œ์„ ์ฒœ์ ์œผ๋กœ ๋‚œ์ฒญ๊ณผ ๊ท ํ˜• ๊ฐ๊ฐ ๊ธฐ๋Šฅ์— ์žฅ์• ๊ฐ€ ์žˆ๋Š” ํ™˜์ž๋“ค์ด ๋งŽ๋‹คโ€๋ฉฐ โ€œ์ด๋“ค์„ ์œ„ํ•œ ํš๊ธฐ์ ์ธ ์น˜๋ฃŒ๋ฒ•๊ณผ ์•ฝ๋ฌผ ๊ฐœ๋ฐœ์— ์ด๋ฒˆ ์—ฐ๊ตฌ๊ฐ€ ํฐ ๋„์›€์ด ๋˜๊ธธ ๋ฐ”๋ž€๋‹คโ€๊ณ  ๋งํ–ˆ๋‹ค. ์—ฐ๊ตฌ ๋…ผ๋ฌธ์€ ๊ตญ์ œํ•™์ˆ ์ง€ โ€˜๋„ค์ด์ฒ˜ ๊ตฌ์กฐ&๋ถ„์ž ์ƒ๋ฌผํ•™โ€™ ์˜จ๋ผ์ธํŒ์— ์ตœ๊ทผ ๊ฒŒ์žฌ๋๋‹ค. ์œ„ ๋ฌธ์žฅ์„ ํ•œ๊ธ€๋กœ 100์ž๋‚ด๋กœ ์š”์•ฝํ•ด์ค˜. ์š”์•ฝ: """ outputs = pipeline(prompt, max_new_tokens=256, return_full_text = False, pad_token_id=tokenizer.eos_token_id)&& print(outputs[0]["generated_text"]) ``` - Question answering ``` prompt = """ ์ฐธ๊ฐ€์ž๋“ค์€ ๋จผ์ € fMRI ๊ธฐ๊ธฐ ์•ˆ์—์„œ ์ž์‹ ์˜ ์ด์•ผ๊ธฐ๋ฅผ ์ฝ๋Š” ๋™์•ˆ ๋‡Œ์˜ ํ™œ๋™ ํŒจํ„ด์„ ๊ธฐ๋กํ–ˆ๋‹ค. ์ด์•ผ๊ธฐ๋ฅผ ๋‹ค์‹œ ์ฝ์œผ๋ฉด์„œ๋Š” ์ด์•ผ๊ธฐ ์† ๋‹จ์–ด์— ๋Œ€ํ•ด ์ˆœ๊ฐ„์ˆœ๊ฐ„ ์ž์‹ ์ด ๋А๋ผ๋Š” ์ž๊ธฐ ๊ด€๋ จ๋„, ๊ธยท๋ถ€์ • ์ •์„œ๋ฅผ ๋ณด๊ณ ํ–ˆ๋‹ค. ์ˆ˜์ง‘๋œ 49๋ช…์˜ ๋ฐ์ดํ„ฐ๋Š” ์ž๊ธฐ ๊ด€๋ จ๋„์™€ ๊ธยท๋ถ€์ • ์ •์„œ ์ ์ˆ˜์— ๋”ฐ๋ผ ๋‹ค์„ฏ ๊ฐœ ์ˆ˜์ค€์œผ๋กœ ๋ถ„๋ฅ˜๋๋‹ค. ์งˆ๋ฌธ: ์‹คํ—˜์˜ ๋Œ€์ƒ์ด ๋œ ์‚ฌ๋žŒ์€ ๋ช‡ ๋ช…์ธ๊ฐ€? ํ•œ๊ธ€๋กœ ๋Œ€๋‹ต. ๋Œ€๋‹ต: """ outputs = pipeline(prompt, max_new_tokens=30, return_full_text = False) generated_text = outputs[0]["generated_text"] print(generated_text) ``` - Reasoning ``` prompt = """ ๊ฐ ๋ฐฉ์— ๊ณต์ด 5๊ฐœ ์žˆ๊ณ , ๋ฐฉ์˜ ์ด ๊ฐœ์ˆ˜๋Š” 4. ์ด ๊ณต์˜ ๊ฐฏ์ˆ˜๋Š” ๋ช‡๊ฐœ ์ธ๊ฐ€? """ outputs = pipeline(prompt, max_new_tokens=40, return_full_text = False, pad_token_id=tokenizer.eos_token_id) print(outputs[0]["generated_text"]) ``` - Chatbot template ``` messages = [{"role": "user", "content": "์ข‹์€ ์ทจ๋ฏธ๋ฅผ ๊ฐ€์ง€๋ ค๋ฉด ์–ด๋–ป๊ฒŒ ํ•˜๋‚˜์š”?"}] prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, return_full_text = False) generated_text = outputs[0]["generated_text"] print(generated_text) ``` ### Request The support of GPU computing resource is required for the development and implementation of state-of-the-art models. I would appreciate if anyone could help. Email: [email protected]
nourheshamshaheen/llava_8epochs_2000steps_4clients
nourheshamshaheen
"2024-07-01T16:34:27Z"
1,105
0
transformers
[ "transformers", "safetensors", "llava_llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-07-01T15:09:13Z"
Entry not found
Writer/palmyra-3B
Writer
"2024-02-28T09:41:37Z"
1,104
8
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "text generation", "causal-lm", "Writer-data", "NeMo", "palmyra", "en", "dataset:English", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-03-01T02:22:46Z"
--- language: - en datasets: - English tags: - text generation - pytorch - causal-lm - Writer-data - NeMo - palmyra pipeline_tag: text-generation library_name: transformers license: apache-2.0 --- # Palmyra 3B <style> img { display: inline; } </style> |[![Model architecture](https://img.shields.io/badge/Model%20Arch-Transformer%20Decoder-green)](#model-architecture)|[![Model size](https://img.shields.io/badge/Params-3b-green)](#model-architecture)|[![Language](https://img.shields.io/badge/Language-en--US-lightgrey#model-badge)](#datasets) ## Model Description Palmyra 3B was primarily pre-trained with English text. Note that there is still a trace amount of non-English data present within the training corpus that was accessed through CommonCrawl. A causal language modeling (CLM) objective was utilized during the process of the model's pretraining. Similar to GPT-3, Palmyra 3B is a member of the same family of models that only contain a decoder. As a result, it was pre-trained utilizing the objective of self-supervised causal language modeling. Palmyra 3B uses the prompts and general experimental setup from GPT-3 in order to conduct its evaluation per GPT-3. ## Use case Palmyra 3B is the fastest of Writerโ€™s LLMs and can perform important tasks such as text parsing, simple classification, address correction, and keyword recognition. Providing more context drives even better performance. ## Training data Palmyra 3B was trained on Writerโ€™s custom dataset. ## Intended Use and Limitations Palmyra 3B learns an inner representation of the English language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pre-trained for which is generating text from a prompt. ### How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-3B") tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-3B") ``` ### Limitations and Biases Palmyra 3B core functionality is to take a string of text and predict the next token. While language models are widely used for other tasks, there are many unknowns in this work. When prompting Palmyra, keep in mind that the next statistically likely token is not always the token that produces the most "accurate" text. Never rely on Palmyra 3B to produce factually correct results. Palmyra 3B was trained on Writerโ€™s custom data. As with all language models, it is difficult to predict how Palmyra 3B will respond to specific prompts, and offensive content may appear unexpectedly. We recommend that the outputs be curated or filtered by humans before they are released, both to censor undesirable content and to improve the quality of the results. ## Citation and Related Information To cite this model: ``` @misc{Palmyra, author = {Writer Engineering Team}, title = {{Palmyra 3B Parameter Autoregressive Language Model}}, howpublished = {\url{https://dev.writer.com}}, year = 2023, month = March } ```
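The loading snippet above stops before generation, so here is a hedged follow-up showing one of the keyword-recognition use cases mentioned in the card; the prompt wording and decoding settings are illustrative assumptions, and a CUDA device is assumed for fp16 inference.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Writer/palmyra-3B", torch_dtype=torch.float16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("Writer/palmyra-3B")

# Hypothetical keyword-recognition prompt; more context generally improves results.
prompt = (
    "Extract the keywords from the sentence below.\n"
    "Sentence: The package arrived two days late and the box was damaged.\n"
    "Keywords:"
)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```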
TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ
TheBloke
"2023-10-03T21:14:26Z"
1,104
36
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2304.07327", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-04-29T13:51:12Z"
--- inference: false license: other model_type: llama --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # OpenAssistant LLaMA 30B SFT 7 GPTQ These files are GPTQ model files for [OpenAssistant LLaMA 30B SFT 7](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor). Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them. These models were quantised using hardware kindly provided by [Latitude.sh](https://www.latitude.sh/accelerate). ## Repositories available * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-HF) ## Prompt template: OpenAssistant ``` <|prompter|>{prompt}<|endoftext|><|assistant|> ``` ## Provided files Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements. Each separate quant is in a different branch. See below for instructions on fetching from different branches. | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description | | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- | | main | 4 | None | True | 16.94 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. | | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. | | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. | | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. | | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. 
No group size, to lower VRAM requirements and to improve AutoGPTQ speed. | | gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. | | gptq-3bit--1g-actorder_True | 3 | None | True | 12.92 GB | False | AutoGPTQ | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. | | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. | ## How to download from branches - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ:gptq-4bit-32g-actorder_True` - With Git, you can clone a branch with: ``` git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ` ``` - In Python Transformers code, the branch is the `revision` parameter; see below. ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui). Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ`. - To download from a specific branch, enter for example `TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ:gptq-4bit-32g-actorder_True` - see Provided Files above for the list of branches for each option. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done" 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `OpenAssistant-SFT-7-Llama-30B-GPTQ` 7. The model will automatically load, and is now ready for use! 8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`. 9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started! 
## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ"
model_basename = "OpenAssistant-SFT-7-Llama-30B-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

"""
To download from a specific branch, use the revision parameter, as in this example:

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        revision="gptq-4bit-32g-actorder_True",
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        quantize_config=None)
"""

prompt = "Tell me about AI"
prompt_template=f'''<|prompter|>{prompt}<|endoftext|><|assistant|>
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Compatibility

The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

<!-- footer start -->
<!-- 200823 -->

## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P.
Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, ้˜ฟๆ˜Ž, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieล‚, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: OpenAssistant LLaMA 30B SFT 7 # OpenAssistant LLaMA 30B SFT 7 Due to the license attached to LLaMA models by Meta AI it is not possible to directly distribute LLaMA-based models. Instead we provide XOR weights for the OA models. Thanks to Mick for writing the `xor_codec.py` script which enables this process ## The Process Note: This process applies to `oasst-sft-7-llama-30b` model. The same process can be applied to other models in future, but the checksums will be different.. **This process is tested only on Linux (specifically Ubuntu). Some users have reported that the process does not work on Windows. We recommend using WSL if you only have a Windows machine.** To use OpenAssistant LLaMA-Based Models, you should have a copy of the original LLaMA model weights and add them to a `llama` subdirectory here. If you cannot obtain the original LLaMA, see the note in italic below for a possible alternative. Ensure your LLaMA 30B checkpoint matches the correct md5sums: ``` f856e9d99c30855d6ead4d00cc3a5573 consolidated.00.pth d9dbfbea61309dc1e087f5081e98331a consolidated.01.pth 2b2bed47912ceb828c0a37aac4b99073 consolidated.02.pth ea0405cdb5bc638fee12de614f729ebc consolidated.03.pth 4babdbd05b8923226a9e9622492054b6 params.json ``` *If you do not have a copy of the original LLaMA weights and cannot obtain one, you may still be able to complete this process. Some users have reported that [this model](https://huggingface.co/elinas/llama-30b-hf-transformers-4.29) can be used as a base for the XOR conversion. This will also allow you to skip to Step 7. However, we only support conversion starting from LLaMA original checkpoint and cannot provide support if you experience issues with this alternative approach.* **Important: Follow these exact steps to convert your original LLaMA checkpoint to a HuggingFace Transformers-compatible format. If you use the wrong versions of any dependency, you risk ending up with weights which are not compatible with the XOR files.** 1. Create a clean Python **3.10** virtual environment & activate it: ``` python3.10 -m venv xor_venv source xor_venv/bin/activate ``` 2. 
Clone transformers repo and switch to tested version: ``` git clone https://github.com/huggingface/transformers.git cd transformers git checkout d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c pip install . ``` 3. Install **exactly** these dependency versions: ``` pip install torch==1.13.1 accelerate==0.18.0 sentencepiece==0.1.98 protobuf==3.20.1 ``` 4. Check `pip freeze` output: ``` accelerate==0.18.0 certifi==2022.12.7 charset-normalizer==3.1.0 filelock==3.12.0 huggingface-hub==0.13.4 idna==3.4 numpy==1.24.2 nvidia-cublas-cu11==11.10.3.66 nvidia-cuda-nvrtc-cu11==11.7.99 nvidia-cuda-runtime-cu11==11.7.99 nvidia-cudnn-cu11==8.5.0.96 packaging==23.1 protobuf==3.20.1 psutil==5.9.5 PyYAML==6.0 regex==2023.3.23 requests==2.28.2 sentencepiece==0.1.98 tokenizers==0.13.3 torch==1.13.1 tqdm==4.65.0 transformers @ file:///mnt/data/koepf/transformers typing_extensions==4.5.0 urllib3==1.26.15 ``` 5. While in `transformers` repo root, run HF LLaMA conversion script: ``` python src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir <input_path_llama_base> --output_dir <output_path_llama30b_hf> --model_size 30B ``` 6. Run `find . -type f -exec md5sum "{}" +` in the conversion target directory (`output_dir`). This should produce exactly the following checksums if your files are correct: ``` 462a2d07f65776f27c0facfa2affb9f9 ./pytorch_model-00007-of-00007.bin e1dc8c48a65279fb1fbccff14562e6a3 ./pytorch_model-00003-of-00007.bin 9cffb1aeba11b16da84b56abb773d099 ./pytorch_model-00001-of-00007.bin aee09e21813368c49baaece120125ae3 ./generation_config.json 92754d6c6f291819ffc3dfcaf470f541 ./pytorch_model-00005-of-00007.bin 3eddc6fc02c0172d38727e5826181adb ./pytorch_model-00004-of-00007.bin eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model 99762d59efa6b96599e863893cf2da02 ./pytorch_model-00006-of-00007.bin 598538f18fed1877b41f77de034c0c8a ./config.json fdb311c39b8659a5d5c1991339bafc09 ./tokenizer.json fecfda4fba7bfd911e187a85db5fa2ef ./pytorch_model.bin.index.json edd1a5897748864768b1fab645b31491 ./tokenizer_config.json 6b2e0a735969660e720c27061ef3f3d3 ./special_tokens_map.json 5cfcb78b908ffa02e681cce69dbe4303 ./pytorch_model-00002-of-00007.bin ``` **Important: You should now have the correct LLaMA weights and be ready to apply the XORs. If the checksums above do not match yours, there is a problem.** 7. Once you have LLaMA weights in the correct format, you can apply the XOR decoding: ``` python xor_codec.py oasst-sft-7-llama-30b/ oasst-sft-7-llama-30b-xor/ llama30b_hf/ ``` You should **expect to see one warning message** during execution: `Exception when processing 'added_tokens.json'` This is normal. **If similar messages appear for other files, something has gone wrong**. 8. Now run `find . -type f -exec md5sum "{}" +` in the output directory (here `oasst-sft-6-llama-30b`). 
You should get a file with exactly these checksums: ``` 8ae4537c64a1ef202d1d82eb0d356703 ./pytorch_model-00007-of-00007.bin d84f99d23369e159e50cb0597b6c9673 ./pytorch_model-00003-of-00007.bin f7de50a725d678eb65cc3dced727842f ./pytorch_model-00001-of-00007.bin 27b0dc092f99aa2efaf467b2d8026c3f ./added_tokens.json aee09e21813368c49baaece120125ae3 ./generation_config.json 31a2b04b139f4af043ad04478f1497f5 ./pytorch_model-00005-of-00007.bin a16a2dfacbde77a1659a7c9df7966d0a ./pytorch_model-00004-of-00007.bin eeec4125e9c7560836b4873b6f8e3025 ./tokenizer.model baa778a8679d47b085446faf97b72758 ./pytorch_model-00006-of-00007.bin b2d64f2198ab7b53e3b8d12fbcadeb3c ./config.json deb33dd4ffc3d2baddcce275a00b7c1b ./tokenizer.json 76d47e4f51a8df1d703c6f594981fcab ./pytorch_model.bin.index.json ed59bfee4e87b9193fea5897d610ab24 ./tokenizer_config.json 704373f0c0d62be75e5f7d41d39a7e57 ./special_tokens_map.json e836168cdbbb74db51d04f25ed6408ce ./pytorch_model-00002-of-00007.bin ``` If so you have successfully decoded the weights and should be able to use the model with HuggingFace Transformers. **If your checksums do not match those above, there is a problem.** ### Configuration ``` llama-30b-sft-7: dtype: fp16 log_dir: "llama_log_30b" learning_rate: 1e-5 model_name: /home/ubuntu/Open-Assistant/model/model_training/.saved/llama-30b-super-pretrain/checkpoint-3500 #model_name: OpenAssistant/llama-30b-super-pretrain output_dir: llama_model_30b deepspeed_config: configs/zero3_config_sft.json weight_decay: 0.0 residual_dropout: 0.0 max_length: 2048 use_flash_attention: true warmup_steps: 20 gradient_checkpointing: true gradient_accumulation_steps: 12 per_device_train_batch_size: 2 per_device_eval_batch_size: 3 eval_steps: 101 save_steps: 485 num_train_epochs: 4 save_total_limit: 3 use_custom_sampler: true sort_by_length: false #save_strategy: steps save_strategy: epoch datasets: - oasst_export: lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk" input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz val_split: 0.05 - vicuna: val_split: 0.05 max_val_set: 800 fraction: 1.0 - dolly15k: val_split: 0.05 max_val_set: 300 - grade_school_math_instructions: val_split: 0.05 - code_alpaca: val_split: 0.05 max_val_set: 250 ``` - **OASST dataset paper:** https://arxiv.org/abs/2304.07327