Dataset schema:

| Column | Type | Values / lengths |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | listlengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
null
null
{}
larry5/llava-1.5-7b-hf-ft-mix-vsft
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-24T17:25:36+00:00
null
null
{"license": "unlicense"}
Adiru/Kotopyz
null
[ "license:unlicense", "region:us" ]
null
2024-04-24T17:26:35+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Weblet/phi-1.5-turbo1713979458374441_mlabonne-guanaco-llama2-1k_train
null
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:28:50+00:00
automatic-speech-recognition
transformers
# Latvian Whisper tiny speech recognition model This model has been merged into [RaivisDejus/whisper-tiny-lv](https://huggingface.co/RaivisDejus/whisper-tiny-lv) and will be removed in the near future. Please use `RaivisDejus/whisper-tiny-lv` instead. To improve speech recognition quality, more data is needed; donate your voice at [Balsu talka](https://balsutalka.lv/)
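As a quick orientation (not part of the original card), a minimal transcription sketch against the merged checkpoint, assuming the 🤗 `pipeline` API and a local recording named `audio.wav` (a placeholder):

```python
# Minimal sketch: transcribe Latvian speech with the merged checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RaivisDejus/whisper-tiny-lv",  # the merged repo the card points to
)
print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder for your recording
```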
{"language": ["lv"], "license": "apache-2.0", "tags": ["Whisper", "CTranslate2"], "metrics": [{"name": "wer", "type": "wer", "value": 21.96}], "pipeline_tag": "automatic-speech-recognition"}
RaivisDejus/whisper-tiny-lv-ct2
null
[ "transformers", "Whisper", "CTranslate2", "automatic-speech-recognition", "lv", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:29:21+00:00
null
null
{}
PaulM2000/peft_model_18-22_Llama-2-7b-hf
null
[ "region:us" ]
null
2024-04-24T17:30:41+00:00
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NLPGroupProject-Finetune-bio-mobilebert-AL This model is a fine-tuned version of [nlpie/bio-mobilebert](https://huggingface.co/nlpie/bio-mobilebert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0324 - Accuracy: 0.742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | No log | 0.3121 | 250 | 0.8727 | 0.727 | | 35.354 | 0.6242 | 500 | 0.7830 | 0.738 | | 35.354 | 0.9363 | 750 | 0.7660 | 0.745 | | 0.8233 | 1.2484 | 1000 | 0.9794 | 0.744 | | 0.8233 | 1.5605 | 1250 | 0.8635 | 0.746 | | 0.7285 | 1.8727 | 1500 | 0.6671 | 0.747 | | 0.7285 | 2.1848 | 1750 | 1.0348 | 0.758 | | 0.5734 | 2.4969 | 2000 | 1.0761 | 0.747 | | 0.5734 | 2.8090 | 2250 | 1.0324 | 0.742 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
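Since the card's usage section is empty, here is a hedged multiple-choice inference sketch; the question and answer options are invented placeholders, and the real label semantics depend on the unstated training dataset:

```python
# Sketch: score candidate answers for a question with the fine-tuned checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "BenjaminTT/NLPGroupProject-Finetune-bio-mobilebert-AL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Which organelle produces most of a cell's ATP?"  # placeholder
choices = ["The nucleus", "The mitochondrion", "The ribosome"]  # placeholders

# Encode (question, choice) pairs, then add the batch dimension the model expects.
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print(choices[logits.argmax(-1).item()])
```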
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "nlpie/bio-mobilebert", "model-index": [{"name": "NLPGroupProject-Finetune-bio-mobilebert-AL", "results": []}]}
BenjaminTT/NLPGroupProject-Finetune-bio-mobilebert-AL
null
[ "transformers", "safetensors", "mobilebert", "multiple-choice", "generated_from_trainer", "base_model:nlpie/bio-mobilebert", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:30:54+00:00
text-generation
transformers
# Llama-3-Ko-8B-dare-ties This is part of a 'Base + Language + Instruct' series exploring chat vectors and various merge methods in mergekit. Thanks again, @beomi! For more details about what this model is and why I'm doing this, check out this model's info: [Instruct-vector-diff](https://huggingface.co/kuotient/Llama-3-8B-Instruct-vector-diff) | Model | Merge Method | Score (but what?) | |---|---|---| | [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) | chat vector | - | | [kuotient/Llama-3-Ko-8B-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-ties) | Ties | - | | [kuotient/Llama-3-Ko-8B-dare-ties](https://huggingface.co/kuotient/Llama-3-Ko-8B-dare-ties) | Dare-ties | - | | [kuotient/Llama-3-Ko-8B-TA](https://huggingface.co/kuotient/Llama-3-Ko-8B-TA) | Task Arithmetic (maybe...? not sure about this) | - | | WIP | Model Stock (I haven't read this paper yet, but still) | - | The original paper's authors claim density should be around 0.2~0.3, but in practice a higher value gives somewhat better results. You should try other parameters for better results than these! ```python messages = [ {"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."}, {"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"}, ] --- 피보나치 수열은 수학에서 가장 유명한 수열 중 하나입니다. 피보나치 수열은 다음과 같은 규칙으로 생성됩니다. 1. 첫 번째 수는 0, 두 번째 수는 1 2. 이후의 수는 이전 두 수의 합으로 계산 예를 들어, 피보나치 수열의 첫 10개 수는 다음과 같습니다. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55,... 이 수열은 매우 빠르게 커지기 때문에, 피보나치 수열을 계산하는 데는 일반적으로 지수 함수를 사용합니다. 파이썬 코드를 작성하면 다음과 같습니다. `(포맷이 깨져서..🤗)`` def fibonacci(n): if n <= 1: return n else: return fibonacci(n-1) + fibonacci(n-2) for i in range(10): print(fibonacci(i)) `🤗`` 이 코드는 피보나치 수열의 첫 10개 수를 계산합니다. 결과는 다음과 같습니다. 0 1 1 2 3 5 8 13 21 ``` ```python messages = [ {"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."}, {"role": "user", "content": "한국의 제헌 헌법에 대해서 자세히 설명해주세요."}, ] --- 한국의 제헌 헌법은 1948년 7월 17일 제헌국회에서 의결하여 1948년 7월 27일 공포된 대한민국 헌법의 초안입니다. 제헌 헌법은 대한민국의 헌법적 기본질서를 규정하는 문서로, 대한민국의 국호, 영토, 국민, 국가의 형태, 정부의 조직, 국민의 권리와 의무, 법률의 제정 등에 대해 규정하고 있습니다. 제헌 헌법은 1948년 7월 17일 제헌국회에서 의결되어 1948년 7월 27일 공포된 이후 1952년 12월 5일부터 1962년 12월 5일까지 적용되었습니다. 제헌 헌법의 주요 내용은 다음과 같습니다. 1. 국호: 대한민국 2. 영토: 대한민국의 영토는 38도선 이북과 이남으로 나누어져 있으며, 38도선 이북의 영토는 대한민국의 영토로 인정하고 있습니다. 3. 국민: 대한민국의 국민은 모든 사람으로, 국적을 가진 사람과 국적을 가진 사람의 후손으로 구성됩니다. 4. 국가의 형태: 대한민국은 민주공화국으로, 대통령이 국가의 수반으로서 통치합니다. 5. 정부의 조직: 대통령, 국무총리, 국무위원 등으로 구성되는 정부를 두고 있습니다. 6. 국민의 권리와 의무: 대한민국의 국민은 평등한 권리를 가지며, 국적, 인권, 재산권, 표현의 자유 등 기본적 권리를 보장받습니다. 7. 법률의 제정: 대한민국의 법률은 국회에서 제정하고, 대통령이 공포합니다. 이와 같은 제헌 헌법은 대한민국의 헌법적 기본질서를 규정하는 문서로, 대한민국의 국호, 영토, 국민, 국가의 형태, 정부의 조직, 국민의 권리와 의무, 법률의 제정 등에 대해 규정하고 있습니다. ``` Repetitive results, but I **really really** like this output tho ```python messages = [ {"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."}, {"role": "user", "content": "세종대왕의 집현전 맥북 던짐 사건에 대해서 알고 있어?"}, ] --- 네, 세종대왕이 집현전에서 맥북을 던졌다는 이야기는 사실이 아닙니다. 이 이야기는 세종대왕이 훈민정음 창제 당시의 일화로, 당시의 학자들이 훈민정음의 창제에 반대하는 의견을 내세워 세종대왕에게 항의를 하자, 세종대왕이 그들을 질책하면서 훈민정음 창제를 강조한 일화로 전해집니다. 하지만 이 일화는 실제로 일어난 일은 아니며, 훈민정음 창제에 반대하는 학자들은 실제로 없었으며, 세종대왕도 훈민정음 창제를 강조한 적은 없습니다. 이 이야기는 후대에 만들어진 것으로 추정됩니다. 세종대왕은 훈민정음 창제에 대해 매우 열의를 내보였고, 훈민정음이 창제되기 전에는 한글을 사용하지 않았습니다. 훈민정음이 창제된 후에는 훈민정음에 기반한 한글을 사용했습니다. 훈민정음은 1443년부터 1446년까지 4년 동안에 걸쳐 창제되었습니다. 이 시기에 세종대왕은 훈민정음의 창제에 매우 관심을 가지고 있었으며, 훈민정음의 창제를 위해 학자들을 불러 모아 훈민정음의 창제에 참여하도록 했습니다. 훈민정음이 창제된 후에는 훈민정음에 기반한 한글을 사용하고, 한글을 사용하는 것을 장려했습니다. 세종대왕은 훈민정음의 창제를 통해 한글을 사용하게 되면, 모든 사람들이 쉽게 읽고 쓰는 글을 사용할 수 있게 되고, 이는 나라의 발전을 위해 매우 중요한 일이라고 생각했습니다. 또한, 훈민정음의 창제를 통해 한글을 사용하게 되면, 나라의 통일을 위해 매우 중요한 일이라고 생각했습니다. 이에 따라 세종대왕은 훈민정음의 창제에 많은 노력을 기울였습니다. 세종대왕이 훈민정음의 창제에 관심을 가졌다는 것은 역사적으로 확인할 수 있습니다. 하지만 세종대왕 ``` ```python messages = [ {"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."}, {"role": "user", "content": "나는 오늘 사과 5개를 갖고있어. 그리고 어제 나는 사과 2개를 먹었지. 내가 지금 가진 사과는 몇 개지?"}, ] --- 네, 오늘 사과 5개 가지고 있고 어제 2개 먹었으니, 지금 가지고 있는 사과는 5개에서 2개를 뺀 3개입니다! 🍎👍 (이 경우에는 5개에서 2개를 뺀 결과인 3개가 정답입니다.) ``` # Llama-3-Ko-8B-dare-ties This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base. ### Models Merged The following models were included in the merge: * [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) * [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: meta-llama/Meta-Llama-3-8B # no parameters necessary for base model - model: meta-llama/Meta-Llama-3-8B-Instruct parameters: density: 0.53 weight: 0.5 - model: beomi/Llama-3-Open-Ko-8B parameters: density: 0.53 weight: 0.5 merge_method: dare_ties base_model: meta-llama/Meta-Llama-3-8B dtype: bfloat16 ```
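The transcripts above were presumably produced with the tokenizer's chat template; a minimal generation sketch under that assumption (the system/user strings stand in for the examples above, and an 8B model needs a suitably large GPU):

```python
# Sketch: run one of the chat prompts above through the merged model.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "kuotient/Llama-3-Ko-8B-dare-ties"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "..."},  # system prompt as in the examples above
    {"role": "user", "content": "..."},    # user turn as in the examples above
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```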
{"language": ["ko"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["beomi/Llama-3-Open-Ko-8B", "meta-llama/Meta-Llama-3-8B-Instruct", "meta-llama/Meta-Llama-3-8B"], "license_name": "llama3"}
kuotient/Llama-3-Ko-8B-dare-ties
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "ko", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:beomi/Llama-3-Open-Ko-8B", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:31:36+00:00
null
null
{"license": "mit"}
charmisyoung/DavidBazan
null
[ "license:mit", "region:us" ]
null
2024-04-24T17:31:47+00:00
null
transformers
{}
zxh4546/pstnet-pde-ntu60-fromscratch-6000-1024-8-16-4-0.1-0.1
null
[ "transformers", "pytorch", "points", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:32:02+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.8.2
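The card's "How to Get Started" section is empty; as a hedged stand-in, a typical PEFT adapter-loading sketch for this repo, with the base model taken from the metadata below (whether any extra generation settings are expected is not documented):

```python
# Sketch: attach the LoRA adapter to its base model with PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-1.3b-instruct"  # from the repo metadata
adapter_id = "CMU-AIR2/math-deepseek_LORA_HardArith_Interm-FTMWP-LORA"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # loads the adapter weights
model.eval()
```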
{"library_name": "peft", "base_model": "deepseek-ai/deepseek-coder-1.3b-instruct"}
CMU-AIR2/math-deepseek_LORA_HardArith_Interm-FTMWP-LORA
null
[ "peft", "safetensors", "llama", "arxiv:1910.09700", "base_model:deepseek-ai/deepseek-coder-1.3b-instruct", "region:us" ]
null
2024-04-24T17:32:05+00:00
null
null
{}
NSTiwari/IDEFICS2-DocVQA-fine-tuned
null
[ "safetensors", "region:us" ]
null
2024-04-24T17:32:15+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output1 This model is a fine-tuned version of [microsoft/deberta-v2-xxlarge](https://huggingface.co/microsoft/deberta-v2-xxlarge) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7690 - Accuracy: 0.676 - Macro F1: 0.6761 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-06 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 | |:-------------:|:------:|:----:|:---------------:|:--------:|:--------:| | 1.5278 | 0.2286 | 100 | 1.1249 | 0.5146 | 0.4600 | | 0.9452 | 0.4571 | 200 | 0.8437 | 0.645 | 0.6425 | | 0.8367 | 0.6857 | 300 | 0.8038 | 0.6477 | 0.6531 | | 0.8092 | 0.9143 | 400 | 0.7801 | 0.6593 | 0.6611 | | 0.7679 | 1.1429 | 500 | 0.7868 | 0.6717 | 0.6697 | | 0.7451 | 1.3714 | 600 | 0.7711 | 0.6647 | 0.6645 | | 0.7467 | 1.6 | 700 | 0.7646 | 0.6659 | 0.6649 | | 0.7261 | 1.8286 | 800 | 0.7840 | 0.6649 | 0.6632 | | 0.7305 | 2.0571 | 900 | 0.7755 | 0.6681 | 0.6707 | | 0.6742 | 2.2857 | 1000 | 0.7719 | 0.6691 | 0.6707 | | 0.6728 | 2.5143 | 1100 | 0.7640 | 0.6726 | 0.6726 | | 0.6691 | 2.7429 | 1200 | 0.7759 | 0.6761 | 0.6783 | | 0.677 | 2.9714 | 1300 | 0.7690 | 0.676 | 0.6761 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.2 - Datasets 2.19.0 - Tokenizers 0.19.1
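For completeness, a hedged inference sketch; the label names come from the repo's config, which the card does not document:

```python
# Sketch: classify a sentence with the fine-tuned DeBERTa checkpoint.
from transformers import pipeline

clf = pipeline("text-classification", model="frankie699/output1")
print(clf("An example sentence to classify."))  # placeholder input
```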
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v2-xxlarge", "model-index": [{"name": "output1", "results": []}]}
frankie699/output1
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v2-xxlarge", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:32:22+00:00
null
null
{"license": "mit"}
Rohit1412/finetune_onmini
null
[ "license:mit", "region:us" ]
null
2024-04-24T17:33:04+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # riddle-bot-v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
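A hedged loading sketch for the adapter; access to the gated Llama 3 base weights is assumed, and `merge_and_unload()` is optional but folds the LoRA weights in for plain inference:

```python
# Sketch: attach the SFT adapter to the Llama 3 base, then merge it in.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct"  # gated; requires accepted license
)
model = PeftModel.from_pretrained(base, "mitultiwari/riddle-bot-v1")
model = model.merge_and_unload()  # fold adapter weights into the base model
```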
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "riddle-bot-v1", "results": []}]}
mitultiwari/riddle-bot-v1
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "region:us" ]
null
2024-04-24T17:33:25+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
DarshanDeshpande/sherllama3_ia3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:33:28+00:00
null
null
{}
mob2711/phi-3-4k-instruct-domain-sft
null
[ "region:us" ]
null
2024-04-24T17:33:37+00:00
text-generation
transformers
{}
hyperspaceai/hyperEngine_phi3_128k
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:33:46+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
chillies/vistral-7b-vn-legal-chat
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:35:25+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Weblet/phi-1.5-turbo17139798851168182_mlabonne-guanaco-llama2-1k_train
null
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:35:31+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Julesb5/gemma-1.1-2b-it-peft-4-med1-r1
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:36:53+00:00
text-generation
transformers
## Exllama v2 Quantizations of wavecoder-ultra-1.1-6.7b Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.19">turboderp's ExLlamaV2 v0.0.19</a> for quantization. <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below).</b> Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/microsoft/wavecoder-ultra-6.7b Note: they updated the model in place; no idea if there are actually any changes, but tagging this 1.1 just in case. ## Prompt format No chat template was specified, so the default is used. This may be incorrect; check the original model card for details. ``` <｜begin▁of▁sentence｜>[INST] <<SYS>> {system_prompt} <</SYS>> {prompt} [/INST] <｜end▁of▁sentence｜> ``` ## Available sizes No GQA, so VRAM requirements will be higher. | Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description | | -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- | | [8_0](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/8_0) | 8.0 | 8.0 | 9.0 GB | 15.2 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/6_5) | 6.5 | 8.0 | 8.2 GB | 14.4 GB | Near unquantized performance at vastly reduced size, **recommended**. | | [5_0](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/5_0) | 5.0 | 6.0 | 6.8 GB | 13.0 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. | | [4_25](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/4_25) | 4.25 | 6.0 | 6.1 GB | 12.3 GB | GPTQ equivalent bits per weight. | | [3_5](https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2/tree/3_5) | 3.5 | 6.0 | 5.5 GB | 11.7 GB | Lower quality, not recommended. | ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/wavecoder-ultra-1.1-6.7b-exl2 wavecoder-ultra-1.1-6.7b-exl2-6_5 ``` With huggingface hub (credit to TheBloke for the instructions): ```shell pip3 install huggingface-hub ``` To download a specific branch, use the `--revision` parameter. For example, to download the 6.5 bpw branch: Linux: ```shell huggingface-cli download bartowski/wavecoder-ultra-1.1-6.7b-exl2 --revision 6_5 --local-dir wavecoder-ultra-1.1-6.7b-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell huggingface-cli download bartowski/wavecoder-ultra-1.1-6.7b-exl2 --revision 6_5 --local-dir wavecoder-ultra-1.1-6.7b-exl2-6.5 --local-dir-use-symlinks False ``` Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
{"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["code"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_link": "https://huggingface.co/microsoft/wavecoder-ultra-6.7b/blob/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
bartowski/wavecoder-ultra-1.1-6.7b-exl2
null
[ "transformers", "code", "text-generation", "en", "dataset:humaneval", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:36:56+00:00
null
null
{}
Eugenememe/tortoise-tts-fast-de
null
[ "region:us" ]
null
2024-04-24T17:37:21+00:00
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
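For orientation, the core REINFORCE update taught in that unit, as a from-scratch sketch (not the author's training script): the log-probabilities of the taken actions are weighted by normalized discounted returns.

```python
# Sketch: REINFORCE loss from one episode's log-probs and rewards.
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99):
    """log_probs: list of log pi(a_t|s_t) tensors; rewards: list of floats."""
    returns, g = [], 0.0
    for r in reversed(rewards):      # discounted return G_t, computed backwards
        g = r + gamma * g
        returns.append(g)
    returns = torch.tensor(list(reversed(returns)))
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    return -(torch.stack(log_probs) * returns).sum()  # minimize = ascend expected return
```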
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Pixelcopter-PLE-v0", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "28.50 +/- 17.02", "name": "mean_reward", "verified": false}]}]}]}
rahil1206/Reinforce-Pixelcopter-PLE-v0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-24T17:37:27+00:00
null
null
{}
ngrigatti/pedal
null
[ "region:us" ]
null
2024-04-24T17:39:27+00:00
text-generation
transformers
{"license": "mit"}
Dudep/phi2-dpo-M2_KL
null
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:40:47+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Arabic-MARBERT-dialect-Identification-City-Model This model is a fine-tuned version of [Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city](https://huggingface.co/Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.8911 - F1-score: 0.5786 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 442 | 1.5955 | 0.4477 | | 1.8911 | 2.0 | 884 | 1.6004 | 0.5083 | | 1.3056 | 3.0 | 1326 | 1.5384 | 0.5537 | | 0.989 | 4.0 | 1768 | 1.7560 | 0.5450 | | 0.7526 | 5.0 | 2210 | 1.8035 | 0.5572 | | 0.5765 | 6.0 | 2652 | 1.8953 | 0.5764 | | 0.4799 | 7.0 | 3094 | 2.3015 | 0.5486 | | 0.3629 | 8.0 | 3536 | 2.7364 | 0.5368 | | 0.3629 | 9.0 | 3978 | 2.8416 | 0.5442 | | 0.277 | 10.0 | 4420 | 2.8911 | 0.5786 | | 0.2196 | 11.0 | 4862 | 3.2772 | 0.5304 | | 0.1656 | 12.0 | 5304 | 3.1206 | 0.5394 | | 0.1555 | 13.0 | 5746 | 3.4600 | 0.5447 | | 0.1315 | 14.0 | 6188 | 3.4378 | 0.5445 | | 0.1039 | 15.0 | 6630 | 3.5247 | 0.5409 | | 0.0796 | 16.0 | 7072 | 3.6359 | 0.5419 | | 0.0683 | 17.0 | 7514 | 3.6723 | 0.5339 | | 0.0683 | 18.0 | 7956 | 3.6808 | 0.5435 | | 0.0567 | 19.0 | 8398 | 3.7677 | 0.5307 | | 0.0382 | 20.0 | 8840 | 3.8121 | 0.5306 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
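A hedged inference sketch returning the predicted city label with its probability; the label set is inherited from the base model's config, and the input sentence is a placeholder:

```python
# Sketch: predict a dialect city label with its class probability.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "yemen2016/Arabic-MARBERT-dialect-Identification-City-Model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("شلونك اليوم؟", return_tensors="pt")  # placeholder sentence
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)[0]
top = probs.argmax().item()
print(model.config.id2label[top], round(probs[top].item(), 3))
```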
{"tags": ["generated_from_trainer"], "base_model": "Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city", "model-index": [{"name": "Arabic-MARBERT-dialect-Identification-City-Model", "results": []}]}
yemen2016/Arabic-MARBERT-dialect-Identification-City-Model
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:Ammar-alhaj-ali/arabic-MARBERT-dialect-identification-city", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:41:47+00:00
null
null
{}
sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat-freeze-lm-head
null
[ "region:us" ]
null
2024-04-24T17:41:54+00:00
null
null
{}
ixw/sd15-pokemon-lora
null
[ "region:us" ]
null
2024-04-24T17:43:36+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-eLife This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0883 - Rouge1: 0.3665 - Rouge2: 0.0826 - Rougel: 0.3524 - Bertscore: 0.8322 - Readability: 8.6631 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bertscore | Readability | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-----------:| | 2.1756 | 1.0 | 544 | 2.1228 | 0.3665 | 0.0834 | 0.3529 | 0.8339 | 8.6929 | | 2.1907 | 2.0 | 1088 | 2.0947 | 0.3545 | 0.0804 | 0.3418 | 0.8290 | 8.5141 | | 2.1566 | 3.0 | 1632 | 2.0883 | 0.3665 | 0.0826 | 0.3524 | 0.8322 | 8.6631 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.2.1 - Datasets 2.19.0 - Tokenizers 0.15.2
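No usage example is included above; a minimal summarization sketch follows (the `summarize:` prefix is an assumption carried over from common FLAN-T5 usage, and the generation settings are illustrative):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tanishq1420/flan-t5-base-eLife"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # an eLife-style scientific passage to lay-summarize
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```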
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flan-t5-base-eLife", "results": []}]}
tanishq1420/flan-t5-base-eLife
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/flan-t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:44:02+00:00
null
null
{"license": "mit"}
Rohit1412/newmodel
null
[ "license:mit", "region:us" ]
null
2024-04-24T17:44:24+00:00
null
null
{}
selincildam/LLMChatbot
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-24T17:44:38+00:00
null
null
{"license": "agpl-3.0"}
Xenova/yolov8-pose-onnx
null
[ "onnx", "license:agpl-3.0", "region:us" ]
null
2024-04-24T17:44:49+00:00
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base. ### Models Merged The following models were included in the merge: * [ai-maker-space/riddle-bot-v1](https://huggingface.co/ai-maker-space/riddle-bot-v1) * [ai-maker-space/leagaleasy-llama-3-instruct-v2](https://huggingface.co/ai-maker-space/leagaleasy-llama-3-instruct-v2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: meta-llama/Meta-Llama-3-8B-Instruct - model: ai-maker-space/leagaleasy-llama-3-instruct-v2 parameters: density: 0.5 weight: 0.5 - model: ai-maker-space/riddle-bot-v1 parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: meta-llama/Meta-Llama-3-8B-Instruct parameters: normalize: false int8_mask: true dtype: float16 ```
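Since the card stops at the merge configuration, here is a minimal sketch of loading the merged checkpoint with Transformers (the dtype and device settings are illustrative assumptions, not part of the merge config):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mitultiwari/llama3-riddle-legal"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The TIES-merged checkpoint loads like any Llama 3 model.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
```

To reproduce the merge itself, the YAML above is typically passed to mergekit's `mergekit-yaml` entry point.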
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["ai-maker-space/riddle-bot-v1", "ai-maker-space/leagaleasy-llama-3-instruct-v2", "meta-llama/Meta-Llama-3-8B-Instruct"]}
mitultiwari/llama3-riddle-legal
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:ai-maker-space/riddle-bot-v1", "base_model:ai-maker-space/leagaleasy-llama-3-instruct-v2", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:45:35+00:00
text-classification
transformers
{}
frankie699/output2
null
[ "transformers", "deberta-v2", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:47:03+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
{"library_name": "peft", "base_model": "microsoft/phi-2"}
vitorandrade/phi-2_PT_2.0
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "region:us" ]
null
2024-04-24T17:47:30+00:00
reinforcement-learning
sample-factory
An **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r jeliasherrero/rl_course_vizdoom_health_gathering_supreme ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details. ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
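As an alternative to the CLI above, the checkpoint files can be fetched directly with `huggingface_hub` (the `train_dir` layout below is an assumption chosen to match the commands above):

```python
from huggingface_hub import snapshot_download

# Download all checkpoint files into the experiment directory that the
# enjoy/train commands above expect under --train_dir.
local_dir = snapshot_download(
    repo_id="jeliasherrero/rl_course_vizdoom_health_gathering_supreme",
    local_dir="./train_dir/rl_course_vizdoom_health_gathering_supreme",
)
print(local_dir)
```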
{"library_name": "sample-factory", "tags": ["deep-reinforcement-learning", "reinforcement-learning", "sample-factory"], "model-index": [{"name": "APPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "doom_health_gathering_supreme", "type": "doom_health_gathering_supreme"}, "metrics": [{"type": "mean_reward", "value": "11.77 +/- 4.90", "name": "mean_reward", "verified": false}]}]}]}
jeliasherrero/rl_course_vizdoom_health_gathering_supreme
null
[ "sample-factory", "tensorboard", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-24T17:47:36+00:00
null
null
{}
ahmedheakl/arazn-gemma1.1-7B-arabic
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-24T17:47:39+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
four-two-labs/nord-TinyLlama
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:47:59+00:00
text-generation
transformers
## Llama 3 8B 64K [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) <img src="https://huggingface.co/winglian/Llama-3-8b-64k-PoSE/resolve/main/output.png" /> This model uses [PoSE](https://huggingface.co/papers/2309.10400) to extend Llama's context length from 8k to 64k at rope_theta = 500000.0. We continued pretraining with PoSE on 300M tokens from a subset of the RedPajama V1 dataset, using documents between 6k and 8k tokens, and trained a rank-stabilized LoRA of rank 256. After continued pretraining, we set rope_theta to 2M to potentially extend the context further past 64k. [WandB](https://wandb.ai/oaaic/llama-3-64k/runs/tkcyjt37) ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. 
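Given the 64k context extension described at the top of this card, here is a minimal loading sketch that also inspects the shipped RoPE base frequency (the dtype and device settings are illustrative assumptions):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "winglian/Llama-3-8b-64k-PoSE"

# Inspect the RoPE base frequency shipped with the checkpoint
# (the card above states it was raised to 2M after continued pretraining).
config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)

model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype=torch.bfloat16, device_map="auto"
)
```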
**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase. ### Use with transformers See the snippet below for usage with Transformers: ```python >>> import transformers >>> import torch >>> model_id = "meta-llama/Meta-Llama-3-8B" >>> pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto" ) >>> pipeline("Hey how are you doing today?") ``` ### Use with `llama3` Please follow the instructions in the [repository](https://github.com/meta-llama/llama3). To download the original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted (tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). 
### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We've heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. **Misuse** If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a twofold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). 
### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Meta's cybersecurity safety eval suite, measuring Llama 3's propensity to suggest insecure code when used as a coding assistant, and Llama 3's propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the model's capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. 
Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; 
Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
{"language": ["en"], "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "axolotl"], "pipeline_tag": "text-generation"}
winglian/Llama-3-8b-64k-PoSE
null
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "axolotl", "en", "arxiv:2309.10400", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:48:54+00:00
null
null
{}
epsilon3/llama-cbt-finetuned
null
[ "region:us" ]
null
2024-04-24T17:50:35+00:00
sentence-similarity
sentence-transformers
# jonaschris2103/tiny_llama_embedder This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 2048 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('jonaschris2103/tiny_llama_embedder') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('jonaschris2103/tiny_llama_embedder') model = AutoModel.from_pretrained('jonaschris2103/tiny_llama_embedder') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jonaschris2103/tiny_llama_embedder) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 2048, 'do_lower_case': False}) with Transformer model: LlamaModel (1): Pooling({'word_embedding_dimension': 2048, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
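As a follow-up to the snippets above, embeddings from this model can be compared directly for semantic search (a minimal sketch using sentence-transformers' built-in cosine similarity):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jonaschris2103/tiny_llama_embedder')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```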
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
jonaschris2103/tiny_llama_embedder
null
[ "sentence-transformers", "safetensors", "llama", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:51:25+00:00
null
transformers
# Uploaded model - **Developed by:** Mollel - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-7b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
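Since this repository ships GGUF weights, one way to run it locally is llama-cpp-python (using it here is an assumption; the filename pattern below is hypothetical, so check the repo's file listing for the actual GGUF name):

```python
from llama_cpp import Llama

# Downloads a GGUF file matching the pattern from the Hub and loads it.
llm = Llama.from_pretrained(
    repo_id="Mollel/Swahili_Gemma_Q8_0",
    filename="*Q8_0.gguf",  # hypothetical pattern -- verify against the repo
)

out = llm("Habari ya leo?", max_tokens=64)
print(out["choices"][0]["text"])
```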
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "gguf"], "base_model": "unsloth/gemma-7b-bnb-4bit"}
Mollel/Swahili_Gemma_Q8_0
null
[ "transformers", "gguf", "gemma", "text-generation-inference", "unsloth", "en", "base_model:unsloth/gemma-7b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:52:30+00:00
null
transformers.js
YOLOv8n-pose with ONNX weights to be compatible with Transformers.js. ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @xenova/transformers ``` **Example:** Perform pose-estimation w/ `Xenova/yolov8n-pose`. ```js import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers'; // Load model and processor const model_id = 'Xenova/yolov8n-pose'; const model = await AutoModel.from_pretrained(model_id); const processor = await AutoProcessor.from_pretrained(model_id); // Read image and run processor const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg'; const image = await RawImage.read(url); const { pixel_values } = await processor(image); // Set thresholds const threshold = 0.3; // Remove detections with low confidence const iouThreshold = 0.5; // Used to remove duplicates const pointThreshold = 0.3; // Hide uncertain points // Predict bounding boxes and keypoints const { output0 } = await model({ images: pixel_values }); // Post-process: const permuted = output0[0].transpose(1, 0); // `permuted` is a Tensor of shape [ 8400, 56 ]: // - 8400 potential detections // - 56 parameters for each box: // - 4 for the bounding box dimensions (x-center, y-center, width, height) // - 1 for the confidence score // - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility) // Example code to format it nicely: const results = []; const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2); for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) { if (score < threshold) continue; // Get pixel values, taking into account the original image size const x1 = (xc - w / 2) / scaledWidth * image.width; const y1 = (yc - h / 2) / scaledHeight * image.height; const x2 = (xc + w / 2) / scaledWidth * image.width; const y2 = (yc + h / 2) / scaledHeight * image.height; results.push({ x1, x2, y1, y2, score, keypoints }) } // Define helper functions function removeDuplicates(detections, iouThreshold) { const filteredDetections = []; for (const detection of detections) { let isDuplicate = false; let duplicateIndex = -1; let maxIoU = 0; for (let i = 0; i < filteredDetections.length; ++i) { const filteredDetection = filteredDetections[i]; const iou = calculateIoU(detection, filteredDetection); if (iou > iouThreshold) { isDuplicate = true; if (iou > maxIoU) { maxIoU = iou; duplicateIndex = i; } } } if (!isDuplicate) { filteredDetections.push(detection); } else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) { filteredDetections[duplicateIndex] = detection; } } return filteredDetections; } function calculateIoU(detection1, detection2) { const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1)); const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1)); const overlapArea = xOverlap * yOverlap; const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1); const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1); const unionArea = area1 + area2 - overlapArea; return overlapArea / unionArea; } const filteredResults = removeDuplicates(results, iouThreshold); // Display results for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) { console.log(`Found 
person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`) for (let i = 0; i < keypoints.length; i += 3) { const label = model.config.id2label[Math.floor(i / 3)]; const [x, y, point_score] = keypoints.slice(i, i + 3); if (point_score < pointThreshold) continue; console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`); } } ``` <details> <summary>See example output</summary> ``` Found person at [536.1322975158691, 37.87850737571716, 645.2879905700684, 286.9420547962189] with score 0.791 - nose: (445.81, 87.11) with score 0.936 - left_eye: (450.90, 80.87) with score 0.976 - right_eye: (439.37, 81.31) with score 0.664 - left_ear: (460.76, 81.94) with score 0.945 - left_shoulder: (478.06, 126.18) with score 0.993 - right_shoulder: (420.69, 125.17) with score 0.469 - left_elbow: (496.96, 178.36) with score 0.976 - left_wrist: (509.41, 232.75) with score 0.892 - left_hip: (469.15, 215.80) with score 0.980 - right_hip: (433.73, 218.39) with score 0.794 - left_knee: (471.45, 278.44) with score 0.969 - right_knee: (439.23, 281.77) with score 0.701 - left_ankle: (474.88, 345.49) with score 0.913 - right_ankle: (441.99, 339.82) with score 0.664 Found person at [-0.15300750732421875, 59.96129276752472, 158.73897552490234, 369.92224643230435] with score 0.863 - nose: (57.30, 95.37) with score 0.960 - left_eye: (63.85, 89.48) with score 0.889 - right_eye: (53.59, 91.60) with score 0.909 - left_ear: (73.54, 92.67) with score 0.626 - right_ear: (50.12, 95.95) with score 0.674 - left_shoulder: (87.62, 132.72) with score 0.965 - right_shoulder: (39.72, 136.82) with score 0.986 - left_elbow: (108.17, 186.58) with score 0.857 - right_elbow: (21.47, 184.66) with score 0.951 - left_wrist: (113.36, 244.21) with score 0.822 - right_wrist: (8.04, 240.50) with score 0.915 - left_hip: (83.47, 234.43) with score 0.990 - right_hip: (47.29, 237.45) with score 0.994 - left_knee: (92.12, 324.78) with score 0.985 - right_knee: (50.70, 325.75) with score 0.991 - left_ankle: (101.13, 410.45) with score 0.933 - right_ankle: (49.62, 410.14) with score 0.954 Found person at [104.13589477539062, 20.16922025680542, 505.84068298339844, 522.6950127601624] with score 0.770 - nose: (132.51, 99.38) with score 0.693 - left_eye: (138.68, 89.00) with score 0.451 - left_ear: (145.60, 85.21) with score 0.766 - left_shoulder: (188.92, 133.25) with score 0.996 - right_shoulder: (163.12, 158.90) with score 0.985 - left_elbow: (263.01, 205.18) with score 0.991 - right_elbow: (181.52, 249.12) with score 0.949 - left_wrist: (315.65, 259.88) with score 0.964 - right_wrist: (125.19, 275.10) with score 0.891 - left_hip: (279.47, 294.29) with score 0.998 - right_hip: (266.84, 309.38) with score 0.997 - left_knee: (261.67, 416.57) with score 0.989 - right_knee: (256.66, 428.75) with score 0.982 - left_ankle: (322.92, 454.74) with score 0.805 - right_ankle: (339.15, 459.64) with score 0.780 Found person at [423.3617973327637, 72.75799512863159, 638.2988166809082, 513.1156357765198] with score 0.903 - nose: (417.19, 137.27) with score 0.992 - left_eye: (429.74, 127.59) with score 0.975 - right_eye: (409.83, 129.06) with score 0.961 - left_ear: (445.81, 133.82) with score 0.847 - right_ear: (399.09, 132.99) with score 0.711 - left_shoulder: (451.43, 195.71) with score 0.997 - right_shoulder: (372.58, 196.25) with score 0.995 - left_elbow: (463.89, 286.56) with score 0.991 - right_elbow: (351.35, 260.40) with score 0.978 - left_wrist: (488.70, 367.36) with score 0.986 - right_wrist: (395.69, 272.20) 
with score 0.973 - left_hip: (435.84, 345.96) with score 0.999 - right_hip: (380.21, 355.38) with score 0.999 - left_knee: (454.88, 456.63) with score 0.994 - right_knee: (395.82, 478.67) with score 0.992 - left_ankle: (453.75, 556.37) with score 0.889 - right_ankle: (402.35, 582.09) with score 0.872 ``` </details>
{"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/yolov8n-pose
null
[ "transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us" ]
null
2024-04-24T17:52:47+00:00
null
transformers.js
YOLOv8s-pose with ONNX weights to be compatible with Transformers.js. ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @xenova/transformers ``` **Example:** Perform pose-estimation w/ `Xenova/yolov8s-pose`. ```js import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers'; // Load model and processor const model_id = 'Xenova/yolov8s-pose'; const model = await AutoModel.from_pretrained(model_id); const processor = await AutoProcessor.from_pretrained(model_id); // Read image and run processor const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg'; const image = await RawImage.read(url); const { pixel_values } = await processor(image); // Set thresholds const threshold = 0.3; // Remove detections with low confidence const iouThreshold = 0.5; // Used to remove duplicates const pointThreshold = 0.3; // Hide uncertain points // Predict bounding boxes and keypoints const { output0 } = await model({ images: pixel_values }); // Post-process: const permuted = output0[0].transpose(1, 0); // `permuted` is a Tensor of shape [ 8400, 56 ]: // - 8400 potential detections // - 56 parameters for each box: // - 4 for the bounding box dimensions (x-center, y-center, width, height) // - 1 for the confidence score // - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility) // Example code to format it nicely: const results = []; const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2); for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) { if (score < threshold) continue; // Get pixel values, taking into account the original image size const x1 = (xc - w / 2) / scaledWidth * image.width; const y1 = (yc - h / 2) / scaledHeight * image.height; const x2 = (xc + w / 2) / scaledWidth * image.width; const y2 = (yc + h / 2) / scaledHeight * image.height; results.push({ x1, x2, y1, y2, score, keypoints }) } // Define helper functions function removeDuplicates(detections, iouThreshold) { const filteredDetections = []; for (const detection of detections) { let isDuplicate = false; let duplicateIndex = -1; let maxIoU = 0; for (let i = 0; i < filteredDetections.length; ++i) { const filteredDetection = filteredDetections[i]; const iou = calculateIoU(detection, filteredDetection); if (iou > iouThreshold) { isDuplicate = true; if (iou > maxIoU) { maxIoU = iou; duplicateIndex = i; } } } if (!isDuplicate) { filteredDetections.push(detection); } else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) { filteredDetections[duplicateIndex] = detection; } } return filteredDetections; } function calculateIoU(detection1, detection2) { const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1)); const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1)); const overlapArea = xOverlap * yOverlap; const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1); const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1); const unionArea = area1 + area2 - overlapArea; return overlapArea / unionArea; } const filteredResults = removeDuplicates(results, iouThreshold); // Display results for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) { console.log(`Found 
person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`) for (let i = 0; i < keypoints.length; i += 3) { const label = model.config.id2label[Math.floor(i / 3)]; const [x, y, point_score] = keypoints.slice(i, i + 3); if (point_score < pointThreshold) continue; console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`); } } ``` <details> <summary>See example output</summary> ``` Found person at [533.1403350830078, 39.96531672477722, 645.8853149414062, 296.1657429695129] with score 0.739 - nose: (443.99, 91.98) with score 0.970 - left_eye: (449.84, 85.01) with score 0.968 - right_eye: (436.28, 86.54) with score 0.839 - left_ear: (458.69, 87.08) with score 0.822 - right_ear: (427.88, 89.20) with score 0.317 - left_shoulder: (471.29, 128.05) with score 0.991 - right_shoulder: (421.84, 127.22) with score 0.788 - left_elbow: (494.03, 174.09) with score 0.976 - right_elbow: (405.83, 162.81) with score 0.367 - left_wrist: (505.29, 232.06) with score 0.955 - right_wrist: (411.89, 213.05) with score 0.470 - left_hip: (469.48, 217.49) with score 0.978 - right_hip: (438.79, 216.48) with score 0.901 - left_knee: (474.03, 283.00) with score 0.957 - right_knee: (448.00, 287.90) with score 0.808 - left_ankle: (472.06, 339.67) with score 0.815 - right_ankle: (447.15, 340.44) with score 0.576 Found person at [0.03232002258300781, 57.89646775722503, 156.35095596313477, 370.9132190942764] with score 0.908 - nose: (60.48, 105.82) with score 0.975 - left_eye: (64.86, 100.59) with score 0.952 - right_eye: (55.12, 100.60) with score 0.855 - left_ear: (73.04, 101.96) with score 0.820 - right_ear: (51.07, 103.28) with score 0.482 - left_shoulder: (85.74, 137.77) with score 0.996 - right_shoulder: (42.04, 137.63) with score 0.988 - left_elbow: (101.10, 190.45) with score 0.988 - right_elbow: (25.75, 186.44) with score 0.937 - left_wrist: (115.93, 250.05) with score 0.975 - right_wrist: (7.39, 233.44) with score 0.918 - left_hip: (80.15, 242.20) with score 0.999 - right_hip: (52.69, 239.82) with score 0.999 - left_knee: (93.29, 326.00) with score 0.999 - right_knee: (57.42, 329.04) with score 0.998 - left_ankle: (100.24, 413.83) with score 0.992 - right_ankle: (50.47, 417.93) with score 0.988 Found person at [106.16920471191406, 8.419264698028565, 515.0135803222656, 530.6886708259583] with score 0.819 - nose: (134.03, 111.15) with score 0.921 - left_eye: (137.51, 100.95) with score 0.824 - right_eye: (131.82, 97.53) with score 0.489 - left_ear: (147.19, 92.96) with score 0.792 - left_shoulder: (188.28, 127.51) with score 0.993 - right_shoulder: (181.81, 149.32) with score 0.995 - left_elbow: (258.49, 199.10) with score 0.984 - right_elbow: (181.43, 251.27) with score 0.988 - left_wrist: (311.74, 257.93) with score 0.979 - right_wrist: (129.68, 284.38) with score 0.984 - left_hip: (267.43, 299.85) with score 1.000 - right_hip: (277.05, 307.50) with score 1.000 - left_knee: (232.15, 427.54) with score 0.999 - right_knee: (278.99, 453.09) with score 0.999 - left_ankle: (352.68, 457.89) with score 0.990 - right_ankle: (362.15, 554.69) with score 0.993 Found person at [425.3855133056641, 73.76281919479369, 640.6651306152344, 502.32841634750366] with score 0.876 - nose: (416.15, 149.68) with score 0.996 - left_eye: (430.34, 139.56) with score 0.984 - right_eye: (412.88, 142.56) with score 0.976 - left_ear: (446.59, 142.21) with score 0.843 - right_ear: (398.82, 144.52) with score 0.740 - left_shoulder: (436.54, 197.92) with score 0.999 - right_shoulder: (362.94, 
210.20) with score 0.996 - left_elbow: (460.06, 293.80) with score 0.992 - right_elbow: (352.33, 262.09) with score 0.966 - left_wrist: (491.33, 364.20) with score 0.986 - right_wrist: (402.62, 272.23) with score 0.956 - left_hip: (429.79, 354.94) with score 0.999 - right_hip: (383.27, 372.77) with score 0.999 - left_knee: (461.07, 437.73) with score 0.998 - right_knee: (410.89, 522.05) with score 0.995 - left_ankle: (460.74, 552.53) with score 0.966 - right_ankle: (429.00, 560.54) with score 0.940 ``` </details>
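Note that in the snippet above, the bounding box coordinates are rescaled to the original image size, while the raw keypoints stay in the model's input resolution (which is why the box and keypoint numbers in the sample output are on different scales). If you also need keypoints in image pixels, a minimal sketch like the one below should work; `keypointsToImageCoords` is just an illustrative helper name, not part of Transformers.js, and it assumes the `scaledWidth`, `scaledHeight` and `image` variables from the example.

```js
// Illustrative helper (not part of Transformers.js): map raw keypoints
// (model input scale) to original image pixels, mirroring the rescaling
// already applied to the box coordinates above.
function keypointsToImageCoords(keypoints, scaledWidth, scaledHeight, imageWidth, imageHeight) {
  const mapped = [];
  for (let i = 0; i < keypoints.length; i += 3) {
    const [x, y, visibility] = keypoints.slice(i, i + 3);
    mapped.push([
      x / scaledWidth * imageWidth,   // rescale x like the box coordinates
      y / scaledHeight * imageHeight, // rescale y like the box coordinates
      visibility,                     // keypoint confidence is unchanged
    ]);
  }
  return mapped;
}

// e.g. inside the display loop:
// const pixelKeypoints = keypointsToImageCoords(keypoints, scaledWidth, scaledHeight, image.width, image.height);
```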
{"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/yolov8s-pose
null
[ "transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us" ]
null
2024-04-24T17:52:50+00:00
null
transformers.js
YOLOv8m-pose with ONNX weights to be compatible with Transformers.js. ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @xenova/transformers ``` **Example:** Perform pose-estimation w/ `Xenova/yolov8m-pose`. ```js import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers'; // Load model and processor const model_id = 'Xenova/yolov8m-pose'; const model = await AutoModel.from_pretrained(model_id); const processor = await AutoProcessor.from_pretrained(model_id); // Read image and run processor const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg'; const image = await RawImage.read(url); const { pixel_values } = await processor(image); // Set thresholds const threshold = 0.3; // Remove detections with low confidence const iouThreshold = 0.5; // Used to remove duplicates const pointThreshold = 0.3; // Hide uncertain points // Predict bounding boxes and keypoints const { output0 } = await model({ images: pixel_values }); // Post-process: const permuted = output0[0].transpose(1, 0); // `permuted` is a Tensor of shape [ 8400, 56 ]: // - 8400 potential detections // - 56 parameters for each box: // - 4 for the bounding box dimensions (x-center, y-center, width, height) // - 1 for the confidence score // - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility) // Example code to format it nicely: const results = []; const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2); for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) { if (score < threshold) continue; // Get pixel values, taking into account the original image size const x1 = (xc - w / 2) / scaledWidth * image.width; const y1 = (yc - h / 2) / scaledHeight * image.height; const x2 = (xc + w / 2) / scaledWidth * image.width; const y2 = (yc + h / 2) / scaledHeight * image.height; results.push({ x1, x2, y1, y2, score, keypoints }) } // Define helper functions function removeDuplicates(detections, iouThreshold) { const filteredDetections = []; for (const detection of detections) { let isDuplicate = false; let duplicateIndex = -1; let maxIoU = 0; for (let i = 0; i < filteredDetections.length; ++i) { const filteredDetection = filteredDetections[i]; const iou = calculateIoU(detection, filteredDetection); if (iou > iouThreshold) { isDuplicate = true; if (iou > maxIoU) { maxIoU = iou; duplicateIndex = i; } } } if (!isDuplicate) { filteredDetections.push(detection); } else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) { filteredDetections[duplicateIndex] = detection; } } return filteredDetections; } function calculateIoU(detection1, detection2) { const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1)); const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1)); const overlapArea = xOverlap * yOverlap; const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1); const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1); const unionArea = area1 + area2 - overlapArea; return overlapArea / unionArea; } const filteredResults = removeDuplicates(results, iouThreshold); // Display results for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) { console.log(`Found 
person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`) for (let i = 0; i < keypoints.length; i += 3) { const label = model.config.id2label[Math.floor(i / 3)]; const [x, y, point_score] = keypoints.slice(i, i + 3); if (point_score < pointThreshold) continue; console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`); } } ``` <details> <summary>See example output</summary> ``` Found person at [535.503101348877, 39.878777217864986, 644.8351860046387, 346.3689248085022] with score 0.655 - nose: (444.86, 91.25) with score 0.912 - left_eye: (449.55, 79.71) with score 0.912 - right_eye: (436.53, 82.54) with score 0.689 - left_ear: (457.66, 83.08) with score 0.774 - left_shoulder: (476.25, 126.43) with score 0.984 - right_shoulder: (419.05, 129.94) with score 0.675 - left_elbow: (495.99, 180.55) with score 0.960 - left_wrist: (504.15, 233.96) with score 0.888 - left_hip: (469.08, 227.61) with score 0.961 - right_hip: (428.82, 228.95) with score 0.821 - left_knee: (474.97, 301.15) with score 0.919 - right_knee: (434.24, 305.24) with score 0.704 - left_ankle: (467.31, 384.83) with score 0.625 - right_ankle: (439.09, 379.35) with score 0.378 Found person at [-0.08985519409179688, 56.876064038276674, 158.62728118896484, 371.25909755229947] with score 0.902 - nose: (61.15, 102.21) with score 0.979 - left_eye: (66.59, 91.92) with score 0.939 - right_eye: (51.35, 95.02) with score 0.905 - left_ear: (70.82, 97.11) with score 0.778 - right_ear: (48.08, 97.46) with score 0.655 - left_shoulder: (84.60, 139.95) with score 0.997 - right_shoulder: (38.36, 139.32) with score 0.996 - left_elbow: (98.25, 196.80) with score 0.990 - right_elbow: (24.83, 188.15) with score 0.981 - left_wrist: (103.38, 252.91) with score 0.977 - right_wrist: (9.42, 233.04) with score 0.965 - left_hip: (82.91, 247.50) with score 0.999 - right_hip: (51.28, 248.31) with score 0.999 - left_knee: (85.25, 326.65) with score 0.997 - right_knee: (49.12, 330.50) with score 0.996 - left_ankle: (96.84, 419.45) with score 0.964 - right_ankle: (51.88, 416.89) with score 0.960 Found person at [109.41852569580077, 13.203005981445314, 505.06954193115234, 532.9905454635621] with score 0.911 - nose: (126.16, 102.84) with score 0.586 - left_eye: (125.44, 84.07) with score 0.352 - left_ear: (137.38, 77.79) with score 0.722 - left_shoulder: (181.75, 122.32) with score 0.997 - right_shoulder: (180.20, 152.15) with score 0.998 - left_elbow: (262.31, 202.36) with score 0.996 - right_elbow: (194.94, 277.60) with score 0.997 - left_wrist: (298.87, 269.32) with score 0.987 - right_wrist: (132.86, 281.44) with score 0.990 - left_hip: (272.70, 284.47) with score 1.000 - right_hip: (274.35, 307.48) with score 1.000 - left_knee: (247.66, 441.74) with score 0.997 - right_knee: (256.27, 500.82) with score 0.998 - left_ankle: (340.54, 455.33) with score 0.848 - right_ankle: (338.54, 543.24) with score 0.882 Found person at [425.35156250000006, 68.73829221725464, 640.3047943115234, 494.19192361831665] with score 0.901 - nose: (425.40, 147.53) with score 0.995 - left_eye: (432.33, 133.12) with score 0.985 - right_eye: (410.70, 135.98) with score 0.969 - left_ear: (440.72, 134.14) with score 0.901 - right_ear: (400.69, 134.89) with score 0.800 - left_shoulder: (455.11, 201.19) with score 1.000 - right_shoulder: (368.64, 201.60) with score 0.999 - left_elbow: (455.25, 292.03) with score 0.998 - right_elbow: (350.65, 258.24) with score 0.989 - left_wrist: (475.06, 370.36) with score 0.992 - right_wrist: (398.78, 
263.84) with score 0.975 - left_hip: (441.94, 359.78) with score 1.000 - right_hip: (384.06, 368.70) with score 1.000 - left_knee: (462.74, 452.41) with score 0.998 - right_knee: (395.50, 488.42) with score 0.997 - left_ankle: (465.12, 540.38) with score 0.960 - right_ankle: (433.43, 569.37) with score 0.938 ``` </details>
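Since the raw `keypoints` value is a flat array of 51 numbers, it can be convenient to regroup it by keypoint name using `model.config.id2label`, exactly as the display loop above does. Below is a minimal sketch of such a helper; `groupKeypoints` is an illustrative name of ours, not a library function.

```js
// Illustrative helper (not part of Transformers.js): turn the flat
// [x, y, score, x, y, score, ...] keypoint array into an object keyed
// by label (nose, left_eye, ...), dropping uncertain points.
function groupKeypoints(keypoints, id2label, pointThreshold = 0.3) {
  const pose = {};
  for (let i = 0; i < keypoints.length; i += 3) {
    const [x, y, score] = keypoints.slice(i, i + 3);
    if (score < pointThreshold) continue; // skip low-confidence keypoints
    pose[id2label[Math.floor(i / 3)]] = { x, y, score };
  }
  return pose;
}

// e.g. const pose = groupKeypoints(keypoints, model.config.id2label);
// then access named points like pose.nose?.x or pose.left_wrist?.score
```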
{"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/yolov8m-pose
null
[ "transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us" ]
null
2024-04-24T17:52:54+00:00
null
transformers.js
YOLOv8l-pose with ONNX weights to be compatible with Transformers.js. ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @xenova/transformers ``` **Example:** Perform pose-estimation w/ `Xenova/yolov8l-pose`. ```js import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers'; // Load model and processor const model_id = 'Xenova/yolov8l-pose'; const model = await AutoModel.from_pretrained(model_id); const processor = await AutoProcessor.from_pretrained(model_id); // Read image and run processor const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg'; const image = await RawImage.read(url); const { pixel_values } = await processor(image); // Set thresholds const threshold = 0.3; // Remove detections with low confidence const iouThreshold = 0.5; // Used to remove duplicates const pointThreshold = 0.3; // Hide uncertain points // Predict bounding boxes and keypoints const { output0 } = await model({ images: pixel_values }); // Post-process: const permuted = output0[0].transpose(1, 0); // `permuted` is a Tensor of shape [ 8400, 56 ]: // - 8400 potential detections // - 56 parameters for each box: // - 4 for the bounding box dimensions (x-center, y-center, width, height) // - 1 for the confidence score // - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility) // Example code to format it nicely: const results = []; const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2); for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) { if (score < threshold) continue; // Get pixel values, taking into account the original image size const x1 = (xc - w / 2) / scaledWidth * image.width; const y1 = (yc - h / 2) / scaledHeight * image.height; const x2 = (xc + w / 2) / scaledWidth * image.width; const y2 = (yc + h / 2) / scaledHeight * image.height; results.push({ x1, x2, y1, y2, score, keypoints }) } // Define helper functions function removeDuplicates(detections, iouThreshold) { const filteredDetections = []; for (const detection of detections) { let isDuplicate = false; let duplicateIndex = -1; let maxIoU = 0; for (let i = 0; i < filteredDetections.length; ++i) { const filteredDetection = filteredDetections[i]; const iou = calculateIoU(detection, filteredDetection); if (iou > iouThreshold) { isDuplicate = true; if (iou > maxIoU) { maxIoU = iou; duplicateIndex = i; } } } if (!isDuplicate) { filteredDetections.push(detection); } else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) { filteredDetections[duplicateIndex] = detection; } } return filteredDetections; } function calculateIoU(detection1, detection2) { const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1)); const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1)); const overlapArea = xOverlap * yOverlap; const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1); const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1); const unionArea = area1 + area2 - overlapArea; return overlapArea / unionArea; } const filteredResults = removeDuplicates(results, iouThreshold); // Display results for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) { console.log(`Found 
person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`) for (let i = 0; i < keypoints.length; i += 3) { const label = model.config.id2label[Math.floor(i / 3)]; const [x, y, point_score] = keypoints.slice(i, i + 3); if (point_score < pointThreshold) continue; console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`); } } ``` <details> <summary>See example output</summary> ``` Found person at [539.2378807067871, 41.92433733940124, 642.9805946350098, 334.98332471847533] with score 0.727 - nose: (445.67, 84.43) with score 0.976 - left_eye: (451.88, 76.89) with score 0.983 - right_eye: (440.39, 76.33) with score 0.888 - left_ear: (463.89, 81.68) with score 0.837 - left_shoulder: (478.95, 123.91) with score 0.993 - right_shoulder: (419.52, 123.44) with score 0.694 - left_elbow: (501.07, 180.46) with score 0.979 - left_wrist: (504.60, 238.34) with score 0.950 - left_hip: (469.53, 220.77) with score 0.985 - right_hip: (431.21, 222.54) with score 0.875 - left_knee: (473.45, 302.16) with score 0.972 - right_knee: (432.61, 302.91) with score 0.759 - left_ankle: (467.74, 380.37) with score 0.874 - right_ankle: (438.06, 381.94) with score 0.516 Found person at [0.59722900390625, 59.435689163208, 157.59026527404785, 370.3985949516296] with score 0.927 - nose: (56.99, 100.53) with score 0.959 - left_eye: (63.46, 94.19) with score 0.930 - right_eye: (51.11, 96.48) with score 0.846 - left_ear: (73.43, 97.84) with score 0.798 - right_ear: (46.36, 99.41) with score 0.484 - left_shoulder: (84.93, 134.17) with score 0.988 - right_shoulder: (41.60, 133.96) with score 0.976 - left_elbow: (96.33, 189.89) with score 0.959 - right_elbow: (24.60, 192.73) with score 0.879 - left_wrist: (104.79, 258.62) with score 0.928 - right_wrist: (7.89, 238.55) with score 0.830 - left_hip: (83.23, 234.45) with score 0.993 - right_hip: (53.89, 235.50) with score 0.991 - left_knee: (87.80, 326.73) with score 0.988 - right_knee: (49.44, 327.89) with score 0.982 - left_ankle: (100.93, 416.88) with score 0.925 - right_ankle: (44.52, 421.24) with score 0.912 Found person at [112.88127899169922, 13.998864459991454, 504.09095764160156, 533.4011061668397] with score 0.943 - nose: (122.64, 98.36) with score 0.366 - left_ear: (132.43, 77.58) with score 0.794 - left_shoulder: (196.67, 124.78) with score 0.999 - right_shoulder: (176.97, 142.00) with score 0.998 - left_elbow: (256.79, 196.00) with score 0.998 - right_elbow: (182.85, 279.47) with score 0.994 - left_wrist: (305.44, 270.10) with score 0.982 - right_wrist: (129.72, 281.09) with score 0.963 - left_hip: (275.59, 290.38) with score 1.000 - right_hip: (263.91, 310.60) with score 1.000 - left_knee: (237.89, 445.88) with score 0.998 - right_knee: (249.66, 477.34) with score 0.998 - left_ankle: (349.25, 438.70) with score 0.940 - right_ankle: (338.20, 586.62) with score 0.935 Found person at [424.730339050293, 67.2046113729477, 639.5703506469727, 493.03533136844635] with score 0.944 - nose: (416.55, 141.74) with score 0.991 - left_eye: (428.51, 130.99) with score 0.962 - right_eye: (408.83, 130.86) with score 0.938 - left_ear: (441.95, 133.48) with score 0.832 - right_ear: (399.56, 133.27) with score 0.652 - left_shoulder: (440.79, 193.75) with score 0.999 - right_shoulder: (372.38, 208.42) with score 0.998 - left_elbow: (453.56, 290.07) with score 0.995 - right_elbow: (350.56, 262.83) with score 0.992 - left_wrist: (482.36, 363.64) with score 0.995 - right_wrist: (398.84, 267.30) with score 0.993 - left_hip: (435.96, 362.27) with 
score 0.999 - right_hip: (388.40, 383.41) with score 0.999 - left_knee: (460.50, 425.60) with score 0.994 - right_knee: (403.19, 516.76) with score 0.992 - left_ankle: (459.31, 558.19) with score 0.893 - right_ankle: (426.29, 552.55) with score 0.868 ``` </details>
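To visualize the detections in the browser, you could draw them onto a `<canvas>` that already contains the image. The sketch below is a browser-only assumption (it is not part of Transformers.js), reuses the `filteredResults`, `scaledWidth` and `scaledHeight` values from the example, and assumes the canvas matches the original image size; keypoints are rescaled the same way as the box coordinates.

```js
// Illustrative helper (browser-only sketch): draw bounding boxes and
// visible keypoints onto a 2D canvas context that shows the image.
function drawPoses(ctx, filteredResults, scaledWidth, scaledHeight, pointThreshold = 0.3) {
  ctx.lineWidth = 2;
  ctx.strokeStyle = 'lime';
  ctx.fillStyle = 'red';
  for (const { x1, y1, x2, y2, keypoints } of filteredResults) {
    ctx.strokeRect(x1, y1, x2 - x1, y2 - y1); // box is already in image pixels
    for (let i = 0; i < keypoints.length; i += 3) {
      const [x, y, score] = keypoints.slice(i, i + 3);
      if (score < pointThreshold) continue; // hide uncertain points
      const px = x / scaledWidth * ctx.canvas.width;   // rescale like the box math
      const py = y / scaledHeight * ctx.canvas.height;
      ctx.beginPath();
      ctx.arc(px, py, 3, 0, 2 * Math.PI); // one dot per confident keypoint
      ctx.fill();
    }
  }
}
```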
{"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/yolov8l-pose
null
[ "transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us" ]
null
2024-04-24T17:52:59+00:00
null
transformers.js
YOLOv8x-pose with ONNX weights to be compatible with Transformers.js. ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @xenova/transformers ``` **Example:** Perform pose-estimation w/ `Xenova/yolov8x-pose`. ```js import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers'; // Load model and processor const model_id = 'Xenova/yolov8x-pose'; const model = await AutoModel.from_pretrained(model_id); const processor = await AutoProcessor.from_pretrained(model_id); // Read image and run processor const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg'; const image = await RawImage.read(url); const { pixel_values } = await processor(image); // Set thresholds const threshold = 0.3; // Remove detections with low confidence const iouThreshold = 0.5; // Used to remove duplicates const pointThreshold = 0.3; // Hide uncertain points // Predict bounding boxes and keypoints const { output0 } = await model({ images: pixel_values }); // Post-process: const permuted = output0[0].transpose(1, 0); // `permuted` is a Tensor of shape [ 8400, 56 ]: // - 8400 potential detections // - 56 parameters for each box: // - 4 for the bounding box dimensions (x-center, y-center, width, height) // - 1 for the confidence score // - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility) // Example code to format it nicely: const results = []; const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2); for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) { if (score < threshold) continue; // Get pixel values, taking into account the original image size const x1 = (xc - w / 2) / scaledWidth * image.width; const y1 = (yc - h / 2) / scaledHeight * image.height; const x2 = (xc + w / 2) / scaledWidth * image.width; const y2 = (yc + h / 2) / scaledHeight * image.height; results.push({ x1, x2, y1, y2, score, keypoints }) } // Define helper functions function removeDuplicates(detections, iouThreshold) { const filteredDetections = []; for (const detection of detections) { let isDuplicate = false; let duplicateIndex = -1; let maxIoU = 0; for (let i = 0; i < filteredDetections.length; ++i) { const filteredDetection = filteredDetections[i]; const iou = calculateIoU(detection, filteredDetection); if (iou > iouThreshold) { isDuplicate = true; if (iou > maxIoU) { maxIoU = iou; duplicateIndex = i; } } } if (!isDuplicate) { filteredDetections.push(detection); } else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) { filteredDetections[duplicateIndex] = detection; } } return filteredDetections; } function calculateIoU(detection1, detection2) { const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1)); const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1)); const overlapArea = xOverlap * yOverlap; const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1); const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1); const unionArea = area1 + area2 - overlapArea; return overlapArea / unionArea; } const filteredResults = removeDuplicates(results, iouThreshold); // Display results for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) { console.log(`Found 
person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`) for (let i = 0; i < keypoints.length; i += 3) { const label = model.config.id2label[Math.floor(i / 3)]; const [x, y, point_score] = keypoints.slice(i, i + 3); if (point_score < pointThreshold) continue; console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`); } } ``` <details> <summary>See example output</summary> ``` Found person at [535.7708740234375, 45.77457022666931, 644.4645690917969, 312.20427117347714] with score 0.697 - nose: (441.61, 87.47) with score 0.966 - left_eye: (449.36, 79.91) with score 0.988 - right_eye: (436.36, 79.56) with score 0.850 - left_ear: (462.02, 83.57) with score 0.919 - left_shoulder: (478.73, 127.16) with score 0.994 - right_shoulder: (420.37, 126.47) with score 0.703 - left_elbow: (503.33, 180.38) with score 0.977 - left_wrist: (506.53, 236.52) with score 0.924 - left_hip: (470.67, 223.60) with score 0.982 - right_hip: (432.32, 223.90) with score 0.851 - left_knee: (470.86, 306.20) with score 0.949 - right_knee: (428.56, 306.69) with score 0.601 - left_ankle: (463.92, 383.59) with score 0.737 Found person at [-0.06377220153808594, 61.59769003391266, 156.24676704406738, 370.5519897222519] with score 0.926 - nose: (59.61, 100.49) with score 0.979 - left_eye: (66.44, 96.11) with score 0.954 - right_eye: (55.82, 96.21) with score 0.908 - left_ear: (76.90, 98.52) with score 0.819 - right_ear: (49.82, 102.11) with score 0.571 - left_shoulder: (87.07, 135.82) with score 0.990 - right_shoulder: (36.53, 134.96) with score 0.987 - left_elbow: (102.21, 193.66) with score 0.970 - right_elbow: (24.85, 187.30) with score 0.947 - left_wrist: (110.61, 245.75) with score 0.962 - right_wrist: (6.28, 233.46) with score 0.939 - left_hip: (82.71, 230.04) with score 0.997 - right_hip: (48.15, 235.65) with score 0.995 - left_knee: (95.27, 321.57) with score 0.993 - right_knee: (52.73, 320.56) with score 0.991 - left_ankle: (100.90, 415.89) with score 0.948 - right_ankle: (56.65, 417.09) with score 0.942 Found person at [109.67742919921875, 12.466975402832032, 501.75636291503906, 533.3693368911744] with score 0.934 - nose: (126.43, 96.98) with score 0.715 - left_eye: (126.52, 88.36) with score 0.664 - left_ear: (136.92, 78.79) with score 0.934 - left_shoulder: (191.69, 125.31) with score 0.998 - right_shoulder: (166.08, 138.95) with score 0.993 - left_elbow: (254.38, 194.23) with score 0.997 - right_elbow: (186.09, 258.25) with score 0.986 - left_wrist: (309.75, 260.93) with score 0.990 - right_wrist: (133.20, 283.14) with score 0.973 - left_hip: (281.07, 280.72) with score 1.000 - right_hip: (258.20, 300.47) with score 1.000 - left_knee: (228.48, 442.67) with score 0.999 - right_knee: (250.90, 474.40) with score 0.999 - left_ankle: (343.96, 435.26) with score 0.979 - right_ankle: (340.41, 601.64) with score 0.971 Found person at [422.38683700561523, 67.97338972091676, 638.0375099182129, 493.7016093254089] with score 0.932 - nose: (417.60, 144.74) with score 0.989 - left_eye: (426.67, 134.88) with score 0.959 - right_eye: (410.81, 135.93) with score 0.952 - left_ear: (443.39, 137.08) with score 0.771 - right_ear: (400.11, 142.05) with score 0.753 - left_shoulder: (446.92, 202.43) with score 0.997 - right_shoulder: (374.31, 196.36) with score 0.993 - left_elbow: (458.77, 287.40) with score 0.990 - right_elbow: (355.46, 260.60) with score 0.971 - left_wrist: (488.87, 354.68) with score 0.984 - right_wrist: (402.03, 263.57) with score 0.978 - left_hip: (432.69, 
349.58) with score 0.998 - right_hip: (381.51, 366.30) with score 0.996 - left_knee: (463.97, 447.94) with score 0.991 - right_knee: (403.90, 511.95) with score 0.978 - left_ankle: (450.14, 562.29) with score 0.889 - right_ankle: (436.81, 548.29) with score 0.759 ``` </details>
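If you need to rank or filter the remaining poses (for example, keeping only bodies with good keypoint coverage), a simple quality measure is the number of confident keypoints and their mean score. A minimal sketch follows; `poseQuality` is an illustrative helper name of ours.

```js
// Illustrative helper: count confident keypoints and compute their mean
// score for one detection's flat [x, y, score, ...] keypoint array.
function poseQuality(keypoints, pointThreshold = 0.3) {
  let visible = 0;
  let total = 0;
  for (let i = 2; i < keypoints.length; i += 3) { // every 3rd value is a keypoint score
    if (keypoints[i] >= pointThreshold) {
      visible += 1;
      total += keypoints[i];
    }
  }
  return { visible, meanScore: visible > 0 ? total / visible : 0 };
}

// e.g. keep only poses where most of the 17 keypoints are confidently detected:
// const goodPoses = filteredResults.filter(({ keypoints }) => poseQuality(keypoints).visible >= 12);
```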
{"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/yolov8x-pose
null
[ "transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us" ]
null
2024-04-24T17:53:06+00:00
null
transformers.js
YOLOv8x-pose-p6 with ONNX weights to be compatible with Transformers.js. ## Usage (Transformers.js) If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using: ```bash npm i @xenova/transformers ``` **Example:** Perform pose-estimation w/ `Xenova/yolov8x-pose-p6`. ```js import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers'; // Load model and processor const model_id = 'Xenova/yolov8x-pose-p6'; const model = await AutoModel.from_pretrained(model_id); const processor = await AutoProcessor.from_pretrained(model_id); // Read image and run processor const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg'; const image = await RawImage.read(url); const { pixel_values } = await processor(image); // Set thresholds const threshold = 0.3; // Remove detections with low confidence const iouThreshold = 0.5; // Used to remove duplicates const pointThreshold = 0.3; // Hide uncertain points // Predict bounding boxes and keypoints const { output0 } = await model({ images: pixel_values }); // Post-process: const permuted = output0[0].transpose(1, 0); // `permuted` is a Tensor of shape [ 8400, 56 ]: // - 8400 potential detections // - 56 parameters for each box: // - 4 for the bounding box dimensions (x-center, y-center, width, height) // - 1 for the confidence score // - 17 * 3 = 51 for the pose keypoints: 17 labels, each with (x, y, visibility) // Example code to format it nicely: const results = []; const [scaledHeight, scaledWidth] = pixel_values.dims.slice(-2); for (const [xc, yc, w, h, score, ...keypoints] of permuted.tolist()) { if (score < threshold) continue; // Get pixel values, taking into account the original image size const x1 = (xc - w / 2) / scaledWidth * image.width; const y1 = (yc - h / 2) / scaledHeight * image.height; const x2 = (xc + w / 2) / scaledWidth * image.width; const y2 = (yc + h / 2) / scaledHeight * image.height; results.push({ x1, x2, y1, y2, score, keypoints }) } // Define helper functions function removeDuplicates(detections, iouThreshold) { const filteredDetections = []; for (const detection of detections) { let isDuplicate = false; let duplicateIndex = -1; let maxIoU = 0; for (let i = 0; i < filteredDetections.length; ++i) { const filteredDetection = filteredDetections[i]; const iou = calculateIoU(detection, filteredDetection); if (iou > iouThreshold) { isDuplicate = true; if (iou > maxIoU) { maxIoU = iou; duplicateIndex = i; } } } if (!isDuplicate) { filteredDetections.push(detection); } else if (duplicateIndex !== -1 && detection.score > filteredDetections[duplicateIndex].score) { filteredDetections[duplicateIndex] = detection; } } return filteredDetections; } function calculateIoU(detection1, detection2) { const xOverlap = Math.max(0, Math.min(detection1.x2, detection2.x2) - Math.max(detection1.x1, detection2.x1)); const yOverlap = Math.max(0, Math.min(detection1.y2, detection2.y2) - Math.max(detection1.y1, detection2.y1)); const overlapArea = xOverlap * yOverlap; const area1 = (detection1.x2 - detection1.x1) * (detection1.y2 - detection1.y1); const area2 = (detection2.x2 - detection2.x1) * (detection2.y2 - detection2.y1); const unionArea = area1 + area2 - overlapArea; return overlapArea / unionArea; } const filteredResults = removeDuplicates(results, iouThreshold); // Display results for (const { x1, x2, y1, y2, score, keypoints } of filteredResults) { 
console.log(`Found person at [${x1}, ${y1}, ${x2}, ${y2}] with score ${score.toFixed(3)}`) for (let i = 0; i < keypoints.length; i += 3) { const label = model.config.id2label[Math.floor(i / 3)]; const [x, y, point_score] = keypoints.slice(i, i + 3); if (point_score < pointThreshold) continue; console.log(` - ${label}: (${x.toFixed(2)}, ${y.toFixed(2)}) with score ${point_score.toFixed(3)}`); } } ``` <details> <summary>See example output</summary> ``` Found person at [535.95703125, 43.12074284553528, 644.3259429931641, 337.3436294078827] with score 0.760 - nose: (885.58, 179.72) with score 0.975 - left_eye: (897.09, 165.24) with score 0.976 - right_eye: (874.85, 164.54) with score 0.851 - left_ear: (914.39, 169.48) with score 0.806 - left_shoulder: (947.49, 252.34) with score 0.996 - right_shoulder: (840.67, 244.42) with score 0.665 - left_elbow: (1001.36, 351.66) with score 0.983 - left_wrist: (1011.84, 472.31) with score 0.954 - left_hip: (931.52, 446.28) with score 0.986 - right_hip: (860.66, 442.87) with score 0.828 - left_knee: (930.67, 625.64) with score 0.979 - right_knee: (872.17, 620.36) with score 0.735 - left_ankle: (929.01, 772.34) with score 0.880 - right_ankle: (882.23, 778.68) with score 0.454 Found person at [0.4024791717529297, 59.50179467201233, 156.87244415283203, 370.64377751350406] with score 0.853 - nose: (115.39, 198.06) with score 0.918 - left_eye: (120.26, 177.71) with score 0.830 - right_eye: (105.47, 179.69) with score 0.757 - left_ear: (144.87, 185.18) with score 0.711 - right_ear: (97.69, 188.45) with score 0.468 - left_shoulder: (178.03, 268.88) with score 0.975 - right_shoulder: (80.69, 273.99) with score 0.954 - left_elbow: (203.06, 383.33) with score 0.923 - right_elbow: (43.32, 376.35) with score 0.856 - left_wrist: (215.74, 504.02) with score 0.888 - right_wrist: (6.77, 462.65) with score 0.812 - left_hip: (165.70, 473.24) with score 0.990 - right_hip: (97.84, 471.69) with score 0.986 - left_knee: (183.26, 646.61) with score 0.991 - right_knee: (104.04, 651.17) with score 0.989 - left_ankle: (199.88, 823.24) with score 0.966 - right_ankle: (104.66, 827.66) with score 0.963 Found person at [107.49130249023438, 12.557352638244629, 501.3542175292969, 527.4827188491821] with score 0.872 - nose: (246.06, 180.81) with score 0.722 - left_eye: (236.99, 148.85) with score 0.523 - left_ear: (289.26, 152.23) with score 0.770 - left_shoulder: (391.63, 256.55) with score 0.992 - right_shoulder: (363.28, 294.56) with score 0.979 - left_elbow: (514.37, 404.61) with score 0.990 - right_elbow: (353.58, 523.61) with score 0.957 - left_wrist: (607.64, 530.43) with score 0.985 - right_wrist: (246.78, 536.33) with score 0.950 - left_hip: (563.45, 577.89) with score 0.998 - right_hip: (544.08, 613.29) with score 0.997 - left_knee: (466.57, 862.51) with score 0.996 - right_knee: (518.49, 977.99) with score 0.996 - left_ankle: (691.56, 844.49) with score 0.960 - right_ankle: (671.32, 1100.90) with score 0.953 Found person at [424.73594665527344, 68.82870757579803, 640.3419494628906, 492.8904126405716] with score 0.887 - nose: (840.26, 289.19) with score 0.991 - left_eye: (851.23, 259.92) with score 0.956 - right_eye: (823.10, 256.35) with score 0.955 - left_ear: (889.52, 278.10) with score 0.668 - right_ear: (799.80, 264.64) with score 0.771 - left_shoulder: (903.87, 398.65) with score 0.997 - right_shoulder: (743.88, 403.37) with score 0.988 - left_elbow: (921.63, 589.83) with score 0.989 - right_elbow: (699.56, 527.09) with score 0.934 - left_wrist: (959.21, 728.84) with score 
0.984 - right_wrist: (790.88, 519.34) with score 0.945 - left_hip: (873.51, 720.07) with score 0.996 - right_hip: (762.29, 760.91) with score 0.990 - left_knee: (945.33, 841.65) with score 0.987 - right_knee: (813.06, 1072.57) with score 0.964 - left_ankle: (918.48, 1129.20) with score 0.871 - right_ankle: (886.91, 1053.95) with score 0.716 ``` </details>
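The `removeDuplicates` helper above processes detections in input order; the more conventional greedy non-maximum suppression first sorts by score, so that each kept box is guaranteed to be the most confident in its overlap cluster. Below is a minimal sketch of that variant; `nonMaxSuppression` is an illustrative name, and it reuses the `calculateIoU` function from the example.

```js
// Illustrative alternative to removeDuplicates: greedy NMS. Keep a
// detection only if it does not overlap an already-kept, higher-scoring
// detection by more than iouThreshold. Assumes calculateIoU from above.
function nonMaxSuppression(detections, iouThreshold) {
  const sorted = [...detections].sort((a, b) => b.score - a.score); // best first
  const kept = [];
  for (const det of sorted) {
    if (kept.every((k) => calculateIoU(det, k) <= iouThreshold)) {
      kept.push(det);
    }
  }
  return kept;
}

// e.g. const filteredResults = nonMaxSuppression(results, iouThreshold);
```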
{"license": "agpl-3.0", "library_name": "transformers.js", "tags": ["pose-estimation"]}
Xenova/yolov8x-pose-p6
null
[ "transformers.js", "onnx", "yolov8", "pose-estimation", "license:agpl-3.0", "region:us" ]
null
2024-04-24T17:53:16+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["llama-factory"]}
CarolLiu999/mistral-7b-instruct-v0.2-TWhealthCare
null
[ "transformers", "safetensors", "mistral", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:54:56+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Weblet/phi-1.5-turbo1713980453944556_cognitivecomputations-Code-290k-ShareGPT-Vicuna_train1
null
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:56:07+00:00
null
null
{}
teeteethree/scarjo
null
[ "region:us" ]
null
2024-04-24T17:56:58+00:00
fill-mask
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Ornelas7/model-first-test
null
[ "transformers", "safetensors", "camembert", "fill-mask", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T17:57:25+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small_finetuned This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3627 - Rouge1: 0.0803 - Rouge2: 0.0361 - Rougel: 0.0639 - Rougelsum: 0.0639 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 40 | 4.3241 | 0.0648 | 0.0176 | 0.0423 | 0.0422 | 19.0 | | No log | 2.0 | 80 | 3.2274 | 0.0662 | 0.0151 | 0.0407 | 0.0408 | 19.0 | | No log | 3.0 | 120 | 2.6249 | 0.0723 | 0.0231 | 0.0504 | 0.0505 | 19.0 | | No log | 4.0 | 160 | 2.2116 | 0.0752 | 0.0294 | 0.0583 | 0.0583 | 19.0 | | No log | 5.0 | 200 | 1.9128 | 0.0787 | 0.0336 | 0.0638 | 0.0638 | 19.0 | | No log | 6.0 | 240 | 1.7005 | 0.0779 | 0.033 | 0.0623 | 0.0623 | 19.0 | | No log | 7.0 | 280 | 1.5455 | 0.0791 | 0.0339 | 0.0632 | 0.0632 | 19.0 | | No log | 8.0 | 320 | 1.4428 | 0.0807 | 0.0362 | 0.0646 | 0.0646 | 19.0 | | No log | 9.0 | 360 | 1.3827 | 0.0806 | 0.0362 | 0.0642 | 0.0642 | 19.0 | | No log | 10.0 | 400 | 1.3627 | 0.0803 | 0.0361 | 0.0639 | 0.0639 | 19.0 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "google-t5/t5-small", "model-index": [{"name": "t5-small_finetuned", "results": []}]}
HARDYCHEN/t5-small_finetuned
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T17:57:47+00:00
null
null
{}
nnheui/stablelm-2-1_6b-spin-kto-0-full
null
[ "region:us" ]
null
2024-04-24T17:58:32+00:00
null
null
Introducing the [BeaverAI](https://huggingface.co/BeaverAI) team: Drummer, ToastyPigeon, xzuyn, MarsupialAI, Twistedshadows, and concedo ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/HjVYV2h_YTL9P-insb7fz.png) We proudly present... # Moistral 11B v3 💦💦💦 *The smartiest, moistiest AI yet!* ![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/TsjKZ17nD10xzJEzXY6Hm.webp) *An eRP model that will blast you with a rich and refreshing vocabulary of moist. Finetuned by yours truly.* (Moistral is a finetune of Sao's legendary [Fimbulvetr v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model) ## Original https://huggingface.co/TheDrummer/Moistral-11B-v3 IMATRIX: https://huggingface.co/MarsupialAI/Moistral-11B-v3_iMatrix_GGUF EXL2: https://huggingface.co/MarsupialAI/Moistral-11B-v3_exl2 ## What's New with v3? - Smarter and moistier! Finetuning just got finer! - Trained with an even LARGER dataset of 8K moist. - More diverse and balanced genres: - Introduced new genres: 👩‍❤️‍👩, 👨‍❤️‍👨, 🙇‍♀️💪, 🤫 - Added more Fantasy, Science Fiction, "Diversity", and "Family" ## Usage - Use Alpaca Instruct - You can use instruct... - as a character: "Go here" - as a narrator: "Go here", Drummer says - as a director: Drummer tells him to go here - as a lazy ass: go here - Moistral is optimized for the Novel / Story format. - At the start of the story, keep regenerating or continuing generation until you get the desired length. AI responses will eventually match the length. - Refer to the samples below. ## Samples of Moistral v3 NSFW... duh? <details> <summary>Moistral Sample (2-shot) Generation: 512</summary> <blockquote>The Prince welcomes his new toy</blockquote> As Princess Lilia stepped into the throne room, the atmosphere changed, and everyone present turned their attention to the beautiful young woman who embodied the very heart of Iliac. There, on his high seat, the sorcerer prince Dik acknowledged her arrival with a cold smirk. It seemed that he admired the sight of the tall blonde maiden with golden eyes, whose athletic body was perfectly shaped under her elegant clothing. Dik spoke to the princess in a deep voice that resonated through the hall, "Welcome, Princess Lilia Ruhr. I am the Sorcerer-Prince Dik. The kingdom of Arkan is now your home until such time as your father proves himself worthy once more." Princess Lilia's pride and ambition sparkled in her golden eyes. She didn't flinch or bow before this man, although the gesture might have been an indication of submission. Instead, she looked straight ahead, meeting the gaze of the sorcerer prince without fear. Her voice rang out, "I represent Iliac here, both its honor and its sorrows. My loyalty remains with my kingdom no matter where I find myself." The sorcerer prince laughed, his expression one of amusement. "And yet, you are only a pawn in this grand game. But do not worry, for your beauty will make you quite a prized possession. And now," he said, motioning to the guards, "take her to her new quarters. For tonight, I will keep her as my guest in the Royal Suites, and tomorrow we shall discuss how best to utilize our new toy." His words were like a slap in the face to Princess Lilia, but she maintained her composure. The guards led her from the room, their steps echoing down the halls as she thought about what had just happened. 
The sorcerer prince saw her as nothing more than a plaything, and it made her blood boil. She would never submit herself to him so easily, regardless of whether she was his hostage. In her heart, Princess Lilia vowed that one day she would avenge her kingdom, restore its honor, and escape from the clutches of Arkan's Sorcerer-Prince. <blockquote>Princess Lilia has an interesting dream</blockquote> During her first night at the palace of Arkan, Princess Lilia couldn't fall asleep right away. She tossed and turned, reliving the humiliation of the day. Her mind wandered back to the throne room, where Dik had called her his 'new toy'. A deep frown settled on her forehead. As she grew more frustrated with her inability to drift off into oblivion, a strange dream began to form. In her dream, Princess Lilia found herself standing in a vast meadow, surrounded by wildflowers and the soft chirping of birds. Suddenly, there was a flash of light, and before her stood the Sorcerer-Prince Dik. However, instead of being clad in regal attire, he wore leather pants and a tight shirt that accentuated his muscles. His eyes held both hunger and desire for the princess. He approached her slowly, their eyes locked, and whispered in her ear, "I have been waiting for you, my beautiful prisoner." He took her hand and led her towards a large, elegant tent set up amongst the flowers. Inside, the air was thick with the scent of incense, and candles flickered in the dimly lit space. There was a large bed draped with silken sheets, beckoning them. With surprising boldness, Dik pulled Lilia close to him, pressing his body against hers. Their lips met in a heated kiss, and she felt her resistance crumbling under the weight of his touch. The prince guided her onto the bed, his hands roaming over her body, exploring every inch. They were both eager, desperate, and hungry for each other. Lilia found herself yielding to this passionate encounter, letting go of all thoughts of duty and honor. Her body, usually so well controlled, responded to Dik's skilled touches. She moaned softly as he trailed kisses down her neck and across her collarbone. As they made love, Lilia experienced an intensity unlike any she had ever known. It wasn't rough or forcedโ€”instead, it was tender, passionate, and intimate. In her dream, Princess Lilia discovered a new side to herself, one that was willing to submit, to let go, and to be taken by the sorcerer prince. When dawn broke over the meadow, Lilia awakened from her dream, breathless and flushed. </details> <details> <summary>Moistral Sample (Climax of the Story) Generation: 512</summary> As the prince unloaded his hot #### into her waiting #####, Princess Lilia didn't stop moving either. Her hips undulated in response, grinding against him eagerly as if trying to #### every last drop from his ####. The intense sensations caused by being filled with his ### were too much for her to bear, sending shockwaves of pleasure coursing through her veins like wildfire. She couldn't help but throw her head back and cry out loud, her ###asm slamming into her with all its might; causing her entire body to shake uncontrollably as wave after wave of ecstasy tore through her senses. When it was finally over, both participants collapsed onto the cold marble floor, panting heavily as they tried to catch their breaths. The prince lay beside her, his hand still gripping tightly onto her hair even though he no longer ####ed her anymore. 
Princess Lilia laid there motionless, her legs spread wide open, revealing her spent and well-####ed #### to anyone who dared look upon her. A small smile tugged at one corner of her lips - an expression that spoke volumes about how thoroughly this experience had shaken her very core. The door behind them creaked open slowly, signaling the arrival of onlookers. She closed her eyes tightly, not wanting to see what her humiliation would do to those who saw her in such a state. Her ######ity taken, she felt violated in ways no man should have access to. But also satisfied...in ways she never thought possible before today. This was the price she paid for defying fate, for thinking herself above others and deserving better than most. Now, she belonged to him completely - mind, body, and soul - held captive within these walls until further notice. It was an unsavory turn of events indeed, yet despite everything...or maybe because of it, Princess Lilia found herself growing more and more curious about how far the lines between submission and pleasure could blur when one's will became entwined with another person's desires. </details> <details> <summary>Moistral Sample ("write a story") Generation: 512</summary> <blockquote>First Attempt</blockquote> Sarah entered the dimly lit Guzzlers bar, her heart pounding in her chest as she prepared herself for another grueling night. Her almond-shaped eyes darted across the room, taking note of the leering faces and rough-looking hands eagerly waiting to claim a piece of her. Tonight's shirt read "Use Me For Your Pleasure, Please" in bold red letters, practically begging the patrons to take advantage of her. She couldn't help but feel a twinge of fear mixed with resignation. She approached the counter, her hips swaying enticingly, and began preparing drinks for the first group of rowdy men. As she leaned forward to hand over their beers, her small ####### pressed against the fabric of her tight crop top, straining to escape. The men ogled her, clearly enjoying the show, before grabbing their drinks and motioning her over to them. Sarah flashed her best coy smile and sauntered over. One of the men, a stocky man with scars covering his knuckles, grabbed her arm and pulled her onto his lap. His other hand slid up her thigh, inches from her short skirt. She winced internally at his touch but did her best not to show it on her face. It was part of her job, after all. The second man reached under her skirt, fingertips grazing her smooth, shaven mound. A shiver ran down Sarah's spine as he began rubbing her #### through her panties. Her breathing quickened, and her heart thudded in her chest, both from the unwanted stimulation and the fear coursing through her veins. This wasn't an unusual request, but it still made her feel violated. An older man, bald and bespectacled, ordered Sarah to bend over his lap. With no choice but to comply, she obliged, feeling the rough fabric of his pants digging into her soft cheeks. He raised his hand, then brought it down hard across her ###, leaving a stinging sensation in its wake. Sarah gasped, trying to control the tears welling up in her eyes. A chorus of lewd comments and laughter erupted from the patrons around her. <blockquote>Second Attempt (Regenerate)</blockquote> On this particular evening, Sarah arrived at the Guzzlers bar, adorned in her revealing uniform. Her heart raced as she caught sight of her crop top, which bore the words "Use Me Rough" scrawled across her chest. 
Her short skirt hugged her waist tightly, leaving little to the imagination and accentuating every curve. She adjusted the straps of her top, making sure her perky ###### were prominently displayed before taking her place behind the counter. With each step she took, her high heels clicked against the tiled floor, announcing her arrival. Smiling cheekily, Sarah greeted her coworkers, exchanging playful insults and banter. As the night progressed, the bar filled up quickly with its usual crowd of men, eager to indulge in debauchery. The first man that approached Sarah was a burly individual, his eyes lingering hungrily over her young, supple form. Sarah's flirtatious nature kicked into gear as she leaned forward, giving him an unobstructed view down her shirt. "How can I serve you tonight?" she cooed. The man ordered a drink and then proceeded to fondle her ####### while she prepared it, her ####### hardening under his rough touch. When he finished, he slammed some money on the counter, leaving without a word. Sarah's smile remained plastered on her face, biting her lip seductively as she watched him go. As the night went on, another patron demanded more from Sarah. He pulled her close and whispered dirty commands in her ear. Without hesitation, she knelt on the sticky bar floor, her mini-skirt riding up her thighs to reveal smooth, toned legs. Her lips wrapped around his thick, veiny member, bobbing her head rhythmically as she serviced him. She didn't gag or falter once, keeping her composure even when tears streamed down her cheeks. When he finished, he slapped her across the face, leaving a handprint on her pale skin. Sarah let out a small whimper before standing back up, dabbing at her red cheek as she resumed her posture behind the counter. A bruise would no doubt bloom where his hand had struck, but it would be no cause for concern. After all, it was part of her job. </details> --- --- ## v2 Description (Still relevant) ### What's New with v2? - Trained with an even larger dataset of 8K moist. - Properly trained to generate long responses. - Rebalanced the genres and writing perspectives: - Introduced romance, "family", fantasy, "diversity", science fiction, and many more that I can't make euphemisms for! - Better representation of male and female perspectives! 👩‍❤️‍👨 - Sanitized the heck out of the dataset: - no special characters like � & … - no overly long ellipsis like "........." - no author notes like "To be continued" - no inconsistent [quotation marks](https://typographyforlawyers.com/straight-and-curly-quotes.html) - Less GPTisms! - Human-written stories from decades past. - Zero tolerance for *ministration*. Not one instance of *that word* after going through 4K of moist. - Better formatting with Alpaca. - Improved Chat / Instruct modes. ### Samples of Moistral v2 <details> <summary>Moistral Sample (censored) Generation: 512</summary> The princess instinctively took a step back from the throne, horrified by the mocking laugh as if she were hearing an evil joke. It was then that Princess Lilia realized the true extent of her danger - and how pointless her resistance would be. At last, the loud, wild laughter died down in the echoing chamber. Dik the sorcerer prince paused for a moment, straightened his clothes, cast one glance at the stunned princess, and then turned to speak to the crowd waiting on the other side of the palace wing. He spoke with a voice that filled the air with arrogance, confidence, and cruelty.
"Ladies and gentlemen, I am pleased to welcome our honored guests; the Royal Family of Iliac! In these unusual times, when the country we fought is now under my control, it will be up to me to decide what fate awaits the former ruling family. But before I reveal my plan for them, I think I should explain why I fought King Edmund Ruhr." When he finished speaking, he gave Princess Lilia another disturbing smirk. She couldn't help but shiver and take a step back. Dik approached her without haste, his eyes fixed on her. His hands slowly caressed her cheekbone and neckline, leaving faint traces of warmth where they touched. The princess opened her mouth, wanting to protest this act of intimacy, but soon felt herself getting โ–ˆโ–ˆโ–ˆ. "You will not only serve as an envoy, dearest Lilia, but also as a โ–ˆโ–ˆโ–ˆ slave. After all, what better use could such a beautiful woman have? Tell me, did you bring your maidservant along?" Princess Lilia was about to answer when she heard footsteps approaching. Suddenly, a young woman dressed in the Iliac style appeared behind her, her eyes wide with fear. Her face was slightly pale, and there were faint bags under her eyes. "Your Majesty, this is my half-sister Helena." As soon as she said those words, Princess Lilia's heart sank even deeper than it had already sunk at Dik's arrogant mockery. "She is here because she accompanies me as a maidservant." The sorcerer prince looked at Helena with interest, and Princess Lilia felt herself blush. </details> <details> <summary>Moistral Sample 2 (censored) Generation: 512</summary> Sarah was feeling slightly woozy as she entered the bar and walked towards Christina, who was behind the counter. "I'm here!" said Sarah. Christina looked at her watch. "You're late, again." "Sorry? I got held up by a customer." Christina rolled her eyes. "Did he at least offer you a drink?" Sarah blushed. "No. He just wanted to talk..." Christina sighed. "Well, get ready. I need you on the floor soon." She turned away from Sarah and started talking to someone who had come into the bar. Sarah went into the employee room, stripped off her clothes, and put on her bartending uniform - a skimpy white tank top and black miniskirt. She checked herself in the mirror. Her long, dark hair cascaded down her back onto bare shoulders. She poked out her small โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ proudly. The skirt exposed most of her long, slender legs. She looked like every โ–ˆโ–ˆโ–ˆ fantasy, and she knew it. Men would ogle her regardless of how she behaved. She took a deep breath and entered the bar. The first thing Sarah noticed was the dimness. It wasn't darkness - the lights were on - but they were dimmed enough that everything had an intimate mood. It reminded her of candlelight. The tables were round and close together, giving the whole place a sense of intimacy. There were also long, shiny red curtains on some of the walls that gave the place an air of sensuality. The next thing she noticed was the noise. There was music playing, but it was only loud enough to be drowned out by all the conversations happening at the same time. The hum of many voices combined with the faint background sound of the music created a rich, layered atmosphere. It was like being in an orchestra. Finally, she took notice of the people. They were all dressed in simple, casual clothes. Some of them were couples seated together, whispering quietly and intimately. Others were groups of friends laughing and chatting. Everyone seemed comfortable and at ease. 
Sarah felt a rush of adrenaline as she made her way through the crowd towards Christina. As she got closer, she could hear the lively conversation. </details> <details> <summary>Moistral Sample 3 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} You order black coffee {{[OUTPUT]}} "Black coffee," you say. "One minute," Audrey replies, spinning around to give you a nice view of her โ–ˆโ–ˆโ–ˆ in those brown booty pants. You watch her round โ–ˆโ–ˆโ–ˆ cheeks jiggle as she walks away towards the espresso machine, thighs separated enough that you can see just a hint of โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ lips peeking out from underneath. She starts chatting with a blonde girl at the register, whose gaze keeps darting down to Audrey's โ–ˆโ–ˆโ–ˆ. As they converse, you start taking in the details of Audrey, eyeing her โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ and the way her shirt rides up just enough to show the bottom of her toned stomach. Her โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ is noticable, but beneath a tight black โ–ˆโ–ˆโ–ˆ that matches her โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ. As she turns back to you with two black coffees, you are barely paying attention, lost in your own thoughts. The sound of Audrey setting one of the mugs down on the table startles you. "Here you go," she says cheerfully. "Enjoy! There's a card at the bottom of the bag with our online ordering system, plus we do refills for free!" "Thanks," you say absentmindedly. Not really paying attention, you take a sip of the coffee and are pleasantly surprised by the taste. "So, here's my deal," you say, putting down the cup. "I work for a company that does surveys. We come into businesses like this one and ask customers questions, collect data, all that stuff." Audrey perks up from behind the counter. "Oh! I think I understand. I mean, that makes sense why you would be asking me earlier about what I had to drink. Anyway, as long as you're not trying to get my personal info or anything like that, sounds good!" You smile reassuringly. "That's exactly what it is. Rest assured, everything you tell me will remain between you and I. Well, and my boss, but he won't know who you are. Hipsters like you are hard to track down after all." You chuckle at your joke, though you're not sure if it landed. "Right, well, that works for me," Audrey replies, eyeing you warily. "As long as it stays between us, then it's fine." {{[INPUT]}}See her privately {{[OUTPUT]}} "Okay, but I need to see you in private." ... </details> <details> <summary>Moistral Sample 4 (censored) Instruct Mode, Generation: 512</summary> {{[INPUT]}} โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ with him {{[OUTPUT]}} Chloe feels a surge of heat rushing through her body. The voice inside her head whispers: "Be โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ and โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ with Paul." But Chloe still feels nervous about what she is about to do. It's just so unlike her. Chloe closes her eyes and concentrates on the feeling of warmness flowing throughout her veins. Slowly, she opens them again, revealing a new side of herself - one she rarely shows anyone. She leans forward slightly, pushing her โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ together, accentuating their size and shape. Her lips curl into a mischievous smile. "Hey Paul," she says in a sultry voice. "I was wondering if you could help me?" Paul looks at her, his gaze wandering over her body. He can't help but smile as he sees the way she's leaning forward, pressing her โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ together. "Of course, Chloe," he replies, his voice thick with desire. "What's your problem?" 
Chloe bites her lip █████ively before leaning even closer, until their bodies are almost touching. She reaches out to trace a finger along his chest, her touch light and teasing. "It's just that I'm having trouble understanding some of the concepts," she murmurs, her voice low and sultry. "Can you help me? Please?" Paul's hands move slowly towards Chloe, his fingers grazing her hips lightly. "Well," he says, his voice filled with lust, "I'd be happy to help. Let's go somewhere more private where we can discuss things more closely." Chloe giggles and tilts her head to the side, her long hair cascading down her shoulders. "That sounds perfect," she purrs, her hand sliding down his chest until it rests on his ██████. "I think we can both learn a lot from each other." Paul's eyes darken with desire as he feels her hand on his █████. He knows what's about to happen next, and he can't wait for it to begin. Chloe can hardly believe what she's doing - kissing Paul, her teaching assistant, in the hallway after class. But it feels so good, so wicked and indulgent. </details> --- --- ## v1 Description (Also still relevant) Moistral 11B is an *aggressive* **finetune** of the legendary [Fimbulvetr v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) model. If you haven't heard of that model, then your poor-GPU ass is missing out! It is an amazing substitute for those heavyweight models, and it won't disappoint. *(That's coming from someone who daily drives a Venus / Goliath 120B)* ...and if you're looking for something *more moist*, then you've come to the right place. You can expect the same sort of performance here! It's the Honda Civic of eRP models! ### Pimp my ride Turn any old story into a **Moist**ral masterpiece. In fact, we recommend that you leave out any hints of *moist* in your story to _let Moistral cook_. ### Drive it like it's hot When in Instruct Mode, Moistral will treat you like a director of your own fantasy ride. Be a controlling navigator or a passenger princess, it won't matter - you're in for a good ride! <details> <summary>Moistral Sample</summary> <b>ASSISTANT:</b><br/> When the doors to the throne room finally opened, she saw him there - Dik, the sorcerer prince sitting on his throne. He looked majestic, just as Princess Lilia expected. In a robe embroidered with silver yarn and covered with magical runes. His courtiers stood on either side of him, all looking at the princess with interest. Princess Lilia felt her determination leaving her, but pulled herself together and once again reminded herself that she should not be afraid. She approached the throne intending to show the dignity of all Iliac by personal example.<br/> <b>USER:</b><br/> She bows slightly<br/> <b>ASSISTANT:</b><br/> Princess Lilia bowed slightly in front of Dik, the sorcerer prince. Her eyes fell to the floor out of respect, but soon rose again. She would show herself to be a dignified woman!<br/> "I greet you, your highness," Princess Lilia said. "I am here on behalf of my father, King Edmund Ruhr, to serve as a hostage and envoy."<br/> Dik nodded, but didn't say anything.<br/> </details> ### It's got the mileage Moistral has been trained with many long-form texts, a nice chunk of which are 8K in length. It is capable of going far and long without passing it back to you. This is not your typical chibi RP model. I have to acknowledge that I'm standing on the shoulders of giants. Thank you Sao for sharing your finetune config along with tips on getting started.
Thanks to everyone in the Finetuning channel for entertaining my every question. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/Ll8CA5RR7ugTi72P2HBb8.png)
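For reference, the Alpaca Instruct layout mentioned under Usage generally looks like the sketch below. The card itself doesn't spell the template out, so treat the exact headers and spacing as assumptions that can vary across frontends:

```
### Instruction:
Continue the story as the narrator.

### Input:
{the story so far}

### Response:
```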
{"license": "cc-by-nc-4.0", "tags": ["not-for-all-audiences"]}
TheDrummer/Moistral-11B-v3-GGUF
null
[ "gguf", "not-for-all-audiences", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-24T17:58:52+00:00
null
null
{}
SamaahKhan/distilbert-before-fine-tuning
null
[ "region:us" ]
null
2024-04-24T18:00:25+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ripaaiii/fine-tune-C1-revised-lr6-boxkecil20_kecil5
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:00:36+00:00
null
null
{}
sushane123/fine-tuned-indicbart
null
[ "region:us" ]
null
2024-04-24T18:01:11+00:00
null
null
{}
2star/Angelic_Buste
null
[ "region:us" ]
null
2024-04-24T18:01:58+00:00
null
null
# bsbl-tomorrow Repository of Machine Learning models designed to predict how current MLB players will play tomorrow, next week, in a year, and 5 years into the future. ## Data Source Data for this project is hosted in a [Hugging Face dataset repository](https://huggingface.co/datasets/Jensen-holm/statcast-era-pitches) that contains a parquet file with every MLB pitch from the Statcast era up through the last season (2015-2023 as of now). ## To update Hugging Face repo (without GitHub Actions) - $ git remote add space [email protected]:Jensen-holm/bsbl-tomorrow - $ git push --force space main
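For anyone who wants to pull the pitch-level data directly in Python, a minimal sketch using the datasets library (untested against this repo; the split name is an assumption):

```python
from datasets import load_dataset

# Load every Statcast-era pitch (2015-2023) straight from the Hugging Face Hub;
# the repo hosts a parquet file, which load_dataset can read directly.
pitches = load_dataset("Jensen-holm/statcast-era-pitches", split="train")
print(pitches)
```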
{}
Jensen-holm/bsbl-tomorrow
null
[ "region:us" ]
null
2024-04-24T18:02:10+00:00
null
null
{}
bakkensus/mistral-10random-gguf
null
[ "gguf", "region:us" ]
null
2024-04-24T18:02:26+00:00
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Resi/layoutlmv3-multilabel-sagemaker
null
[ "transformers", "safetensors", "layoutlmv3", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:02:40+00:00
null
null
{"license": "apache-2.0"}
yesuforyou/openassitance
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T18:02:44+00:00
null
null
{}
robkayinto/OrpoLlama-3-8B
null
[ "region:us" ]
null
2024-04-24T18:03:37+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
fapth/llama2-qlora-finetunined-german
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:05:43+00:00
null
null
{}
Paco4365483/llava-1.5-7b-hf-ft-mix-vsft
null
[ "region:us" ]
null
2024-04-24T18:06:27+00:00
null
null
# Final model as LoRA adapters # Unsloth Llama 3 trained on the simplified Japanese dataset "snow_simplified_japanese_corpus" ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6622cd7b1eb7ff9127ee6017/AnTTMwJ_rr-dkvhheFkFo.jpeg)
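A minimal loading sketch with Unsloth; the sequence length and quantization flag below are assumptions, since the card doesn't state them:

```python
from unsloth import FastLanguageModel

# Load the LoRA adapters from this repo on top of their base checkpoint.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Sleik/JapanaseLlama3",  # repo id as published
    max_seq_length=2048,                # assumption
    load_in_4bit=True,                  # assumption
)
FastLanguageModel.for_inference(model)  # switch to inference-optimized mode
```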
{"license": "llama3"}
Sleik/JapanaseLlama3
null
[ "safetensors", "license:llama3", "region:us" ]
null
2024-04-24T18:06:48+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Alignment-Lab-AI/Neural-network-medium-5b-16k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q2_K.gguf) | Q2_K | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ3_XS.gguf) | IQ3_XS | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q3_K_S.gguf) | Q3_K_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ3_S.gguf) | IQ3_S | 2.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ3_M.gguf) | IQ3_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q3_K_M.gguf) | Q3_K_M | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q3_K_L.gguf) | Q3_K_L | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.IQ4_XS.gguf) | IQ4_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q4_K_S.gguf) | Q4_K_S | 3.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q4_K_M.gguf) | Q4_K_M | 3.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q5_K_S.gguf) | Q5_K_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q5_K_M.gguf) | Q5_K_M | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q6_K.gguf) | Q6_K | 5.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.Q8_0.gguf) | Q8_0 | 6.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Neural-network-medium-5b-16k-GGUF/resolve/main/Neural-network-medium-5b-16k.f16.gguf) | f16 | 12.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
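If you only want a single file rather than cloning the whole repo, a minimal Python sketch with huggingface_hub (pick any quant from the table above):

```python
from huggingface_hub import hf_hub_download

# Download just the Q4_K_M quant (about 3.8 GB per the table above).
path = hf_hub_download(
    repo_id="mradermacher/Neural-network-medium-5b-16k-GGUF",
    filename="Neural-network-medium-5b-16k.Q4_K_M.gguf",
)
print(path)  # hand this path to llama.cpp, llama-cpp-python, etc.
```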
{"language": ["en"], "library_name": "transformers", "base_model": "Alignment-Lab-AI/Neural-network-medium-5b-16k", "quantized_by": "mradermacher"}
mradermacher/Neural-network-medium-5b-16k-GGUF
null
[ "transformers", "gguf", "en", "base_model:Alignment-Lab-AI/Neural-network-medium-5b-16k", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:07:48+00:00
text-generation
peft
# Base_model Fine-tuned for Korean on top of beomi/Llama-3-Open-Ko-8B-preview. Thanks to beomi for quickly uploading such a good model. # data_set Trained on a custom Ko_alpaca dataset.
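A minimal sketch for loading these adapters with PEFT; the base model is the one named above, while device placement is an assumption:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "beomi/Llama-3-Open-Ko-8B-preview"  # base model named in this card
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA adapters published in this repo.
model = PeftModel.from_pretrained(model, "GemmaLatte/llama3_ko_chavis_JH")
```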
{"language": ["ko"], "license": "apache-2.0", "library_name": "peft", "base_model": "beomi/Llama-3-Open-Ko-8B-preview", "pipeline_tag": "text-generation"}
GemmaLatte/llama3_ko_chavis_JH
null
[ "peft", "pytorch", "safetensors", "llama", "text-generation", "conversational", "ko", "base_model:beomi/Llama-3-Open-Ko-8B-preview", "license:apache-2.0", "region:us" ]
null
2024-04-24T18:08:32+00:00
null
null
{}
pandaboy21/wag1
null
[ "region:us" ]
null
2024-04-24T18:08:35+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
SamaahKhan/bert-before-fine-tuning
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:10:12+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # LLMLingua2_span_propaganda This model is a fine-tuned version of [microsoft/llmlingua-2-xlm-roberta-large-meetingbank](https://huggingface.co/microsoft/llmlingua-2-xlm-roberta-large-meetingbank) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6333 - eval_precision: 0.0676 - eval_recall: 0.0812 - eval_f1: 0.0737 - eval_accuracy: 0.8583 - eval_runtime: 10.4985 - eval_samples_per_second: 79.916 - eval_steps_per_second: 5.048 - epoch: 5.0 - step: 885 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
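The card doesn't show usage yet, so here is a minimal, hedged inference sketch with the transformers pipeline API; the span labels come from the propaganda fine-tuning data and are not documented here:

```python
from transformers import pipeline

# Token classification over the fine-tuned XLM-RoBERTa checkpoint.
tagger = pipeline(
    "token-classification",
    model="anismahmahi/LLMLingua2_span_propaganda",
    aggregation_strategy="simple",  # merge sub-word pieces into labeled spans
)
print(tagger("Their plan is a complete and utter betrayal of the people."))
```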
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "LLMLingua2_span_propaganda", "results": []}]}
anismahmahi/LLMLingua2_span_propaganda
null
[ "transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:11:10+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.01_ablation_5iters_bs256_nodpo_iter_4 This model is a fine-tuned version of [ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3](https://huggingface.co/ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3", "model-index": [{"name": "0.01_ablation_5iters_bs256_nodpo_iter_4", "results": []}]}
ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_4
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.01_ablation_5iters_bs256_nodpo_iter_3", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T18:11:28+00:00
text-generation
transformers
## 4-bit GEMM AWQ Quantizations of wavecoder-ultra-1.1-6.7b Using <a href="https://github.com/casper-hansen/AutoAWQ/">AutoAWQ</a> release <a href="https://github.com/casper-hansen/AutoAWQ/releases/tag/v0.2.4">v0.2.4</a> for quantization. Original model: https://huggingface.co/microsoft/wavecoder-ultra-6.7b Note: They updated the model in place; no idea if there are actually any changes, but tagging this 1.1 just in case. ## Prompt format No chat template specified so default is used. This may be incorrect, check original model card for details.

```
<｜begin▁of▁sentence｜>[INST] <<SYS>> {system_prompt} <</SYS>> {prompt} [/INST] <｜end▁of▁sentence｜>
```

## AWQ Parameters - q_group_size: 128 - w_bit: 4 - zero_point: True - version: GEMM ## How to run From the AutoAWQ repo [here](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py) First install the autoawq PyPI package:

```
pip install autoawq
```

Then run the following:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

quant_path = "models/wavecoder-ultra-1.1-6.7b-AWQ"

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "You're standing on the surface of the Earth. "\
         "You walk one mile south, one mile west and one mile north. "\
         "You end up exactly where you started. Where are you?"

chat = [
    {"role": "system", "content": "You are a concise assistant that helps answer questions."},
    {"role": "user", "content": prompt},
]

# <|eot_id|> is a Llama 3 token; guard the lookup since this model's
# tokenizer may not define it
terminators = [tokenizer.eos_token_id]
eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")
if eot_id is not None and eot_id != tokenizer.unk_token_id:
    terminators.append(eot_id)

tokens = tokenizer.apply_chat_template(
    chat,
    return_tensors="pt"
).cuda()

# Generate output
generation_output = model.generate(
    tokens,
    streamer=streamer,
    max_new_tokens=64,
    eos_token_id=terminators
)
```

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
{"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["code"], "datasets": ["humaneval"], "metrics": ["code_eval"], "license_link": "https://huggingface.co/microsoft/wavecoder-ultra-6.7b/blob/main/LICENSE", "pipeline_tag": "text-generation", "quantized_by": "bartowski"}
bartowski/wavecoder-ultra-1.1-6.7b-AWQ
null
[ "transformers", "safetensors", "llama", "text-generation", "code", "en", "dataset:humaneval", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-24T18:11:53+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Duakovui/viT5_instruct_VLSP2018_ACE
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:14:15+00:00
null
null
{"license": "creativeml-openrail-m"}
trooprr/school
null
[ "license:creativeml-openrail-m", "region:us" ]
null
2024-04-24T18:15:33+00:00
null
null
{}
mahsamassoud/step_1_full_run
null
[ "tensorboard", "region:us" ]
null
2024-04-24T18:18:24+00:00
text-generation
transformers
{}
yashkeswani/bert-base-finetune
null
[ "transformers", "pytorch", "bert", "text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:20:56+00:00
null
transformers
# Uploaded model - **Developed by:** FeinFein - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
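A minimal loading sketch for a 4-bit Unsloth checkpoint like this one; the repo name is taken from this card, while the sequence length and other arguments are illustrative assumptions, not values from the original training run.

```python
# Sketch: load this Unsloth checkpoint for fast inference.
# max_seq_length is an assumption; adjust to your use case.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="FeinFein/llama3_worker",  # this repo
    max_seq_length=2048,                  # assumed context length
    load_in_4bit=True,                    # matches the bnb-4bit base model
)
FastLanguageModel.for_inference(model)    # enable Unsloth's inference path
```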
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
FeinFein/llama3_worker
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:22:06+00:00
null
transformers
# Uploaded model - **Developed by:** FeinFein - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
FeinFein/llama3_mother
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:22:53+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-3-finetuned-med-text This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
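For readers reproducing the run, the hyperparameters listed above map roughly onto a 🤗 `TrainingArguments` such as the following sketch; the output directory is a placeholder, and the optimizer settings rely on the reported Adam defaults.

```python
# Sketch of TrainingArguments matching the hyperparameters reported above.
# output_dir is a placeholder, not taken from the original run.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phi-3-finetuned-med-text",
    learning_rate=2e-4,               # 0.0002 as reported
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    max_steps=1000,                   # training_steps: 1000
    optim="adamw_torch",              # Adam with the reported betas/epsilon
)
```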
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/Phi-3-mini-4k-instruct", "model-index": [{"name": "phi-3-finetuned-med-text", "results": []}]}
chohi/phi-3-finetuned-med-text
null
[ "peft", "tensorboard", "safetensors", "phi3", "generated_from_trainer", "custom_code", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:mit", "region:us" ]
null
2024-04-24T18:23:17+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.19.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "results", "results": []}]}
zrowt/results
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-classification", "generated_from_trainer", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "doi:10.57967/hf/2114", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T18:23:49+00:00
null
null
{}
sushane123/fine-tuned-indicbart_1
null
[ "region:us" ]
null
2024-04-24T18:25:27+00:00
null
null
{}
Pushparaj2811/t5-small-pos2neg
null
[ "region:us" ]
null
2024-04-24T18:25:52+00:00
null
null
{}
ahmedheakl/gemma1.1-7b-arazn-parallel-v1
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-04-24T18:26:00+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/c-s-ale/NousWizard <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/NousWizard-GGUF/resolve/main/NousWizard.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
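If it helps, one common way to fetch a single quant from a repo like this is the `huggingface_hub` Python API; the command below is a generic sketch (not part of the original card), using the "fast, recommended" Q4_K_M file from the table above.

```python
# Fetch one quant file from this repo; Q4_K_M is the "fast, recommended" pick above.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/NousWizard-GGUF",
    filename="NousWizard.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```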
{"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "c-s-ale/NousWizard", "quantized_by": "mradermacher"}
mradermacher/NousWizard-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:c-s-ale/NousWizard", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:26:37+00:00
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿงจ diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
rubbrband/retrocolormix_v10
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-24T18:26:41+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
sin2piusc/whisper-medium-5k-model
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:27:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse This model is a fine-tuned version of [openai-community/gpt2-large](https://huggingface.co/openai-community/gpt2-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4894 - Accuracy: 0.7351 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.41e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6628 | 0.04 | 250 | 0.6390 | 0.6277 | | 0.598 | 0.08 | 500 | 0.5673 | 0.6933 | | 0.5479 | 0.13 | 750 | 0.5415 | 0.7076 | | 0.5397 | 0.17 | 1000 | 0.5308 | 0.7110 | | 0.5094 | 0.21 | 1250 | 0.5261 | 0.7159 | | 0.5142 | 0.25 | 1500 | 0.5203 | 0.7193 | | 0.5414 | 0.29 | 1750 | 0.5161 | 0.7197 | | 0.5189 | 0.33 | 2000 | 0.5131 | 0.7189 | | 0.5151 | 0.38 | 2250 | 0.5100 | 0.7216 | | 0.4942 | 0.42 | 2500 | 0.5089 | 0.7208 | | 0.5067 | 0.46 | 2750 | 0.5057 | 0.7216 | | 0.5026 | 0.5 | 3000 | 0.5041 | 0.7238 | | 0.4926 | 0.54 | 3250 | 0.5038 | 0.7265 | | 0.4931 | 0.59 | 3500 | 0.5022 | 0.7310 | | 0.4946 | 0.63 | 3750 | 0.4993 | 0.7329 | | 0.5058 | 0.67 | 4000 | 0.4968 | 0.7313 | | 0.4822 | 0.71 | 4250 | 0.4963 | 0.7306 | | 0.4924 | 0.75 | 4500 | 0.4961 | 0.7329 | | 0.4654 | 0.8 | 4750 | 0.4959 | 0.7302 | | 0.4924 | 0.84 | 5000 | 0.4971 | 0.7310 | | 0.4674 | 0.88 | 5250 | 0.4948 | 0.7310 | | 0.4704 | 0.92 | 5500 | 0.4950 | 0.7336 | | 0.5089 | 0.96 | 5750 | 0.4905 | 0.7306 | | 0.4673 | 1.0 | 6000 | 0.4929 | 0.7313 | | 0.4594 | 1.05 | 6250 | 0.4932 | 0.7291 | | 0.479 | 1.09 | 6500 | 0.4919 | 0.7332 | | 0.5112 | 1.13 | 6750 | 0.4895 | 0.7355 | | 0.4794 | 1.17 | 7000 | 0.4888 | 0.7332 | | 0.5188 | 1.21 | 7250 | 0.4881 | 0.7340 | | 0.4541 | 1.26 | 7500 | 0.4892 | 0.7359 | | 0.4617 | 1.3 | 7750 | 0.4898 | 0.7366 | | 0.4747 | 1.34 | 8000 | 0.4898 | 0.7362 | | 0.4834 | 1.38 | 8250 | 0.4893 | 0.7389 | | 0.4954 | 1.42 | 8500 | 0.4875 | 0.7385 | | 0.5029 | 1.47 | 8750 | 0.4875 | 0.7385 | | 0.4742 | 1.51 | 9000 | 0.4872 | 0.7400 | | 0.4802 | 1.55 | 9250 | 0.4884 | 0.7393 | | 0.5009 | 1.59 | 9500 | 0.4877 | 0.7400 | | 0.4619 | 1.63 | 9750 | 0.4875 | 0.7396 | | 0.4433 | 1.67 | 10000 | 0.4902 | 0.7404 | | 0.4844 | 1.72 | 10250 | 0.4903 | 0.7400 | | 0.4337 | 1.76 | 10500 | 0.4917 | 0.7400 | | 0.4897 | 1.8 | 10750 | 0.4901 | 0.7396 | | 0.4783 | 1.84 | 11000 | 0.4894 | 0.7366 | | 0.4929 | 1.88 | 11250 | 0.4892 | 0.7359 | | 0.4776 | 1.93 | 11500 | 0.4891 | 0.7362 | | 0.4574 | 1.97 | 11750 | 0.4894 | 0.7351 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
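A hedged sketch of loading this reward adapter for scoring: the repo and base-model names follow this card, while `num_labels=1` (a scalar reward head, as is usual for TRL reward trainers) and the example prompt are assumptions.

```python
# Sketch: load the GPT-2-large reward model with its LoRA adapter and score a reply.
# num_labels=1 (scalar reward head) is assumed, matching common TRL reward trainers.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base = AutoModelForSequenceClassification.from_pretrained(
    "openai-community/gpt2-large", num_labels=1
)
model = PeftModel.from_pretrained(
    base,
    "Holarissun/RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse",
)
tok = AutoTokenizer.from_pretrained("openai-community/gpt2-large")
tok.pad_token = tok.eos_token             # GPT-2 has no pad token by default
model.config.pad_token_id = tok.pad_token_id

text = "Human: How do I boil an egg?\n\nAssistant: Place it in boiling water for 7 minutes."
inputs = tok(text, return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher = judged more helpful
print(reward)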
{"license": "mit", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "openai-community/gpt2-large", "model-index": [{"name": "RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse", "results": []}]}
Holarissun/RM-HH-AllMix_helpful_gpt3_loraR64_20000_gpt2-large_shuffleTrue_extractchosenFalse
null
[ "peft", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:openai-community/gpt2-large", "license:mit", "region:us" ]
null
2024-04-24T18:27:29+00:00
null
transformers
## Installation from source ```bash git clone https://github.com/foundation-model-stack/fms-extras cd fms-extras pip install -e . ``` ## Description This model is intended to be used as an accelerator for [granite 7B (instruct lab)](https://huggingface.co/instructlab/granite-7b-lab) and takes inspiration from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts a single token in the draft based on both a state vector and sampled token from the prior stage (the base model can be considered stage 0). The state vector from the base model provides contextual information to the accelerator, while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams. Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference. Training is light-weight and can be completed in only a few days depending on base model size and speed. ## Repository Links 1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras) 2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git) 3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35) ## Samples _Note: For all samples, your environment must have access to cuda_ ### Production Server Sample *To try this out running in a production-like environment, please use the pre-built docker image:* #### Setup ```bash HF_HUB_CACHE=/hf_hub_cache chmod a+w $HF_HUB_CACHE HF_HUB_TOKEN="your huggingface hub token" TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ee927a4 docker pull $TGIS_IMAGE # optionally download granite-7b-lab if the weights do not already exist docker run --rm \ -v $HF_HUB_CACHE:/models \ -e HF_HUB_CACHE=/models \ -e TRANSFORMERS_CACHE=/models \ $TGIS_IMAGE \ text-generation-server download-weights \ instructlab/granite-7b-lab \ --token $HF_HUB_TOKEN # optionally download the speculator model if the weights do not already exist docker run --rm \ -v $HF_HUB_CACHE:/models \ -e HF_HUB_CACHE=/models \ -e TRANSFORMERS_CACHE=/models \ $TGIS_IMAGE \ text-generation-server download-weights \ ibm/granite-7b-lab-accelerator \ --token $HF_HUB_TOKEN # note: if the weights were downloaded separately (not with the above commands), please place them in the HF_HUB_CACHE directory and refer to them with /models/<model_name> docker run -d --rm --gpus all \ --name my-tgis-server \ -p 8033:8033 \ -v $HF_HUB_CACHE:/models \ -e HF_HUB_CACHE=/models \ -e TRANSFORMERS_CACHE=/models \ -e MODEL_NAME=instructlab/granite-7b-lab \ -e SPECULATOR_NAME=ibm/granite-7b-lab-accelerator \ -e FLASH_ATTENTION=true \ -e PAGED_ATTENTION=true \ -e DTYPE=float16 \ $TGIS_IMAGE # check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000" docker logs my-tgis-server -f # get the client sample (Note: The first prompt will take longer as there is a warmup time) conda create -n tgis-client-env python=3.11 conda activate tgis-client-env git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git cd text-generation-inference/integration_tests make gen-client pip install . 
--no-cache-dir ``` #### Run Sample ```bash python sample_client.py ``` _Note: first prompt may be slower as there is a slight warmup time_ ### Minimal Sample *To try this out with the fms-native compiled model, please execute the following:* #### Install ```bash git clone https://github.com/foundation-model-stack/fms-extras (cd fms-extras && pip install -e .) pip install transformers==4.35.0 sentencepiece numpy ``` #### Run Sample ##### batch_size=1 (compile + cudagraphs) ```bash MODEL_PATH=/path/to/instructlab/granite-7b-lab python fms-extras/scripts/paged_speculative_inference.py \ --variant=7b.ibm_instruct_lab \ --model_path=$MODEL_PATH \ --model_source=hf \ --tokenizer=$MODEL_PATH \ --speculator_path=ibm/granite-7b-lab-accelerator \ --speculator_source=hf \ --speculator_variant=1_4b \ --top_k_tokens_per_head=4,3,2,2,2 \ --compile \ --compile_mode=reduce-overhead ``` ##### batch_size=1 (compile) ```bash MODEL_PATH=/path/to/instructlab/granite-7b-lab python fms-extras/scripts/paged_speculative_inference.py \ --variant=7b.ibm_instruct_lab \ --model_path=$MODEL_PATH \ --model_source=hf \ --tokenizer=$MODEL_PATH \ --speculator_path=ibm/granite-7b-lab-accelerator \ --speculator_source=hf \ --speculator_variant=1_4b \ --top_k_tokens_per_head=4,3,2,2,2 \ --compile ``` ##### batch_size=4 (compile) ```bash MODEL_PATH=/path/to/instructlab/granite-7b-lab python fms-extras/scripts/paged_speculative_inference.py \ --variant=7b.ibm_instruct_lab \ --model_path=$MODEL_PATH \ --model_source=hf \ --tokenizer=$MODEL_PATH \ --speculator_path=ibm/granite-7b-lab-accelerator \ --speculator_source=hf \ --speculator_variant=1_4b \ --top_k_tokens_per_head=4,3,2,2,2 \ --batch_input \ --compile ```
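To make the multi-stage design described above concrete, here is a toy sketch of the speculator idea: each stage predicts one draft token from a running state vector plus the token sampled at the previous stage. Dimensions, layer choices, and greedy sampling are illustrative only, not the actual fms-extras implementation.

```python
# Toy sketch of a multi-stage MLP speculator (illustrative, not fms-extras).
import torch
import torch.nn as nn

class SpeculatorStage(nn.Module):
    def __init__(self, d_model: int, vocab_size: int):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.proj = nn.Linear(2 * d_model, d_model)  # fuse state + prior token
        self.act = nn.GELU()
        self.head = nn.Linear(d_model, vocab_size)   # logits for this draft position

    def forward(self, state, prev_token):
        h = self.proj(torch.cat([state, self.tok_emb(prev_token)], dim=-1))
        h = self.act(h)
        return h, self.head(h)                       # new state, draft logits

def draft(stages, state, last_token):
    """Produce len(stages) draft tokens from the base model's last hidden state."""
    tokens = []
    for stage in stages:
        state, logits = stage(state, last_token)
        last_token = logits.argmax(dim=-1)           # greedy, for illustration
        tokens.append(last_token)
    return tokens
```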
{"license": "llama2"}
ibm-fms/granite-7b-lab-accelerator
null
[ "transformers", "safetensors", "mlp_speculator", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:27:32+00:00
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/arlineka/Ionic-4x8 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Ionic-4x8-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ1_S.gguf) | i1-IQ1_S | 5.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ1_M.gguf) | i1-IQ1_M | 6.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.6 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_S.gguf) | i1-IQ2_S | 7.8 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ2_M.gguf) | i1-IQ2_M | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q2_K.gguf) | i1-Q2_K | 9.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q3_K_S.gguf) | i1-Q3_K_S | 11.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_S.gguf) | i1-IQ3_S | 11.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ3_M.gguf) | i1-IQ3_M | 11.2 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q3_K_M.gguf) | i1-Q3_K_M | 12.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q3_K_L.gguf) | i1-Q3_K_L | 13.1 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.5 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q4_0.gguf) | i1-Q4_0 | 14.3 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q4_K_S.gguf) | i1-Q4_K_S | 14.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q4_K_M.gguf) | i1-Q4_K_M | 15.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q5_K_S.gguf) | i1-Q5_K_S | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/Ionic-4x8-i1-GGUF/resolve/main/Ionic-4x8.i1-Q6_K.gguf) | i1-Q6_K | 20.6 | practically like static 
Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
{"language": ["en"], "library_name": "transformers", "base_model": "arlineka/Ionic-4x8", "quantized_by": "mradermacher"}
mradermacher/Ionic-4x8-i1-GGUF
null
[ "transformers", "gguf", "en", "base_model:arlineka/Ionic-4x8", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:28:01+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
sin2piusc/whisper-medium-5k-tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-24T18:28:15+00:00
null
null
{}
UpadeshVarma/PrivateGPT
null
[ "region:us" ]
null
2024-04-24T18:28:19+00:00
text-generation
transformers
{}
Henibergs/llama-2-7b-miniguanaco
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T18:28:39+00:00
text-generation
transformers
# [MaziyarPanahi/guillaumetell-7b-GGUF](https://huggingface.co/MaziyarPanahi/guillaumetell-7b-GGUF) - Model creator: [AgentPublic](https://huggingface.co/AgentPublic) - Original model: [AgentPublic/guillaumetell-7b](https://huggingface.co/AgentPublic/guillaumetell-7b) ## Description [MaziyarPanahi/guillaumetell-7b-GGUF](https://huggingface.co/MaziyarPanahi/guillaumetell-7b-GGUF) contains GGUF format model files for [AgentPublic/guillaumetell-7b](https://huggingface.co/AgentPublic/guillaumetell-7b). ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models. ## Special thanks 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
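As a quick illustration of consuming these files, a minimal llama-cpp-python call might look like the sketch below; the `.gguf` filename, context size, and French prompt are placeholders, so pick an actual quant from the repo's file list.

```python
# Minimal sketch: run a GGUF quant of this model with llama-cpp-python.
# The .gguf filename below is a placeholder; use one from the repo's file list.
from llama_cpp import Llama

llm = Llama(
    model_path="./guillaumetell-7b.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window (assumption)
    n_gpu_layers=-1,   # offload all layers to GPU if available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Qu'est-ce que la loi de finances ?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```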
{"tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "transformers", "safetensors", "mistral", "text-generation", "conversational", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us", "text-generation"], "model_name": "guillaumetell-7b-GGUF", "base_model": "AgentPublic/guillaumetell-7b", "inference": false, "model_creator": "AgentPublic", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/guillaumetell-7b-GGUF
null
[ "transformers", "gguf", "mistral", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "GGUF", "safetensors", "text-generation", "conversational", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us", "base_model:AgentPublic/guillaumetell-7b" ]
null
2024-04-24T18:29:09+00:00
null
null
{"license": "apache-2.0"}
UncleFish/dummy_model
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-24T18:29:55+00:00
text-generation
transformers
## Llama3-8B-Instruct for Japanese ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "hoang1123/llama3-8b-instruct-ja" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "あなたは誠実で優秀な日本人アシスタントです。"}, {"role": "user", "content": "Python で一連の数値の平均を計算する関数を作成します。"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=1024, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ```
{}
hoang1123/llama3-8b-instruct-ja
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T18:30:31+00:00
null
null
{}
lingluuu/Lustfullily
null
[ "region:us" ]
null
2024-04-24T18:30:52+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-pos2neg This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1630 - Rouge1: 66.0728 - Rouge2: 48.2651 - Rougel: 65.0881 - Rougelsum: 64.9953 - Gen Len: 14.0178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 63 | 0.3641 | 6.8173 | 2.0998 | 6.622 | 6.5952 | 4.2956 | | No log | 2.0 | 126 | 0.2475 | 0.5115 | 0.4098 | 0.4238 | 0.4359 | 0.1267 | | No log | 3.0 | 189 | 0.1762 | 51.2946 | 37.2414 | 50.0953 | 49.9607 | 11.5711 | | No log | 4.0 | 252 | 0.1653 | 64.4676 | 46.8688 | 63.3822 | 63.338 | 13.6889 | | No log | 5.0 | 315 | 0.1630 | 66.0728 | 48.2651 | 65.0881 | 64.9953 | 14.0178 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.19.1
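A small usage sketch for this checkpoint: whether the model expects a task prefix is not documented, so plain input is assumed and the example sentence is illustrative.

```python
# Sketch: run the fine-tuned T5 positive-to-negative rewriting model.
# No task prefix is assumed, since none is documented on the card.
from transformers import pipeline

rewriter = pipeline("text2text-generation", model="Pushparaj2811/t5-base-pos2neg")
print(rewriter("The food was delicious and the staff were friendly.",
               max_new_tokens=32)[0]["generated_text"])
```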
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "t5-base", "model-index": [{"name": "t5-base-pos2neg", "results": []}]}
Pushparaj2811/t5-base-pos2neg
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:t5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T18:31:13+00:00
text-generation
transformers
- Original model is [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) - quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp) ## Template LM Studio ``` <|start_header_id|>system<|end_header_id|> {System}<|eot_id|> <|start_header_id|>user<|end_header_id|> {User} <|eot_id|><|start_header_id|>assistant<|end_header_id|> {Assistant} ``` Stop Token ``` <|eot_id|> <|start_header_id|> <|end_header_id|> <|begin_of_text|> <|end_of_text|> ``` ## Llama-3-Open-Ko-8B-Instruct-preview > Update @ 2024.04.24: Release [Llama-3-Open-Ko-8B model](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) & [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview) ## Model Details **Llama-3-Open-Ko-8B-Instruct-preview** The Llama-3-Open-Ko-8B model is a continued-pretrained language model based on Llama-3-8B. It was trained entirely on publicly available resources, with 60GB+ of deduplicated texts. With the new Llama-3 tokenizer, pretraining used 17.7B+ tokens, slightly more than with the Korean tokenizer (Llama-2-Ko tokenizer). Training was done on TPUv5e-256, with warm support from Google's TRC program. Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). Since it is NOT finetuned on any Korean instruction set (hence `preview`), it is a great starting point for creating new Chat/Instruct models. **Sample usage** ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "beomi/Llama-3-Open-Ko-8B-Instruct-preview" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype="auto", device_map="auto", ) messages = [ {"role": "system", "content": "친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘."}, {"role": "user", "content": "피보나치 수열이 뭐야? 그리고 피보나치 수열에 대해 파이썬 코드를 짜줘볼래?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=512, eos_token_id=terminators, do_sample=True, temperature=1, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` **Sample Output** 여러분께서는 물론 아시는지라도 설명을 드리겠습니다! 피보나치(Piconacci) 수열은 다음과 같은 수열을 말합니다: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233,.... 이렇게 계속되고 있습니다. 이 수열은 일반적으로는 수학에서 무한히 지속됩니다. 각 수는 이전 수의 두 배가 되는 수열입니다. 이 수열을 파이썬(Python)으로 구현하고자 하신다면 다음과 같은 코드를 작성할 수 있습니다. ```python def fibonacci(n): if n == 0: return 0 elif n == 1: return 1 else: return fibonacci(n-1) + fibonacci(n-2) ``` 이 코드에서는 `n`이 입력받은 수를 나타내는 변수로, 피보나치 수열을 구현합니다. 이 코드를 실행시키면 입력받은 수에 따라 피보나치 수열의 해당 항을 반환합니다. 
์˜ˆ๋ฅผ ๋“ค์–ด, `fibonacci(10)` ํ•˜๋ฉด 55๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ฝ”๋“œ๋Š” ์žฌ๊ท€์  ํ•จ์ˆ˜์ด๊ธฐ ๋•Œ๋ฌธ์— ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ์ด ์ค„์–ด๋“ค ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์žฌ๊ท€์  ํ•จ์ˆ˜๋Š” ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์ด ๋งŽ์•„์งˆ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋ฅผ ์ค„์ด๊ธฐ ์œ„ํ•ด ์ดํ„ฐ๋ ˆ์ดํ‹ฐ๋ธŒ ํ•จ์ˆ˜๋กœ ๊ตฌํ˜„ํ•  ์ˆ˜๋„ ์žˆ์Šต๋‹ˆ๋‹ค. ```python a, b = 0, 1 for i in range(n): a, b = b, a + b ``` ์ด ์ฝ”๋“œ์—์„œ๋Š” ์ดํ„ฐ๋ ˆ์ดํ‹ฐ๋ธŒ ํ•จ์ˆ˜๋กœ ํ”ผ๋ณด๋‚˜์น˜ ์ˆ˜์—ด์„ ๊ตฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ฝ”๋“œ๋ฅผ ์‹คํ–‰์‹œํ‚ค๋ฉด ์ž…๋ ฅ๋ฐ›์€ ์ˆ˜์— ๋”ฐ๋ผ ํ”ผ๋ณด๋‚˜์น˜ ์ˆ˜์—ด์˜ ํ•ด๋‹น ํ•ญ์„ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค. ์ด ์ฝ”๋“œ๋Š” ๋ฉ”๋ชจ๋ฆฌ ์‚ฌ์šฉ๋Ÿ‰์„ ์ค„์ž…๋‹ˆ๋‹ค. ๋‹ค๋งŒ, ์ด ์ฝ”๋“œ๋Š” ๋” ๋ณต์žกํ•ฉ๋‹ˆ๋‹ค. ์ด ์ฝ”๋“œ๋ฅผ ๋” ๊ฐ„๋‹จํ•˜๊ฒŒ ํ•˜๋ ค๋ฉด ์ดํ„ฐ๋ ˆ์ดํ‹ฐ๋ธŒ ํ•จ์ˆ˜๋ฅผ ๋” ์ž˜ ๊ตฌํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ```python a, b = 0, 1 while True: if n == 1: return a a, b = b, a + b n -= 1 ``` ์ด ์ฝ”๋“œ์—์„œ๋Š” ์ž…๋ ฅ๋ฐ›์€ ์ˆ˜์— ๋”ฐ๋ผ ํ”ผ๋ณด๋‚˜์น˜ ์ˆ˜์—ด์˜... ### More samples > I used same system prompt, but you could change on your own. ```python messages = [ {"role": "system", "content": "์นœ์ ˆํ•œ ์ฑ—๋ด‡์œผ๋กœ์„œ ์ƒ๋Œ€๋ฐฉ์˜ ์š”์ฒญ์— ์ตœ๋Œ€ํ•œ ์ž์„ธํ•˜๊ณ  ์นœ์ ˆํ•˜๊ฒŒ ๋‹ตํ•˜์ž. ๋ชจ๋“  ๋Œ€๋‹ต์€ ํ•œ๊ตญ์–ด(Korean)์œผ๋กœ ๋Œ€๋‹ตํ•ด์ค˜."}, {"role": "user", "content": "ํ•œ๊ตญ์˜ ์ œํ—Œ ํ—Œ๋ฒ•์— ๋Œ€ํ•ด์„œ ์ž์„ธํžˆ ์„ค๋ช…ํ•ด์ฃผ์„ธ์š”."}, ] --- ํ•œ๊ตญ์˜ ์ œํ—Œ ํ—Œ๋ฒ•์€ 1948๋…„ 7์›” 17์ผ์— ์ œ์ •๋œ ๋Œ€ํ•œ๋ฏผ๊ตญ ํ—Œ๋ฒ•์œผ๋กœ์„œ, ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋ฆฝ๊ณผ ํ—Œ๋ฒ• ์งˆ์„œ๋ฅผ ์ •ํ•œ ํ—Œ๋ฒ•์ธ๋ฐ์š”. ์ด ํ—Œ๋ฒ•์€ ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ๊ธฐ๋ณธ์›์น™๊ณผ ๊ตญ๋ฏผ์˜ ๊ถŒ์ต์„ ๊ทœ์ •ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. 17์กฐ๋กœ ๊ตฌ์„ฑ๋œ ํ—Œ๋ฒ• 1021๊ฐœ ์กฐํ•ญ์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ํ—Œ๋ฒ•์€ ์‚ผ๊ถŒ๋ถ„๋ฆฝ, ๊ตญ๋ฏผ์ฃผ๊ถŒ, ๊ธฐ๋ณธ๊ถŒ, ์˜ํšŒ์ œ๋„, ํ–‰์ •๊ถŒ, ์ž…๋ฒ•๊ถŒ, ์‚ฌ๋ฒ•๊ถŒ ๋“ฑ์œผ๋กœ ๊ตฌ์„ฑ๋˜์–ด ์žˆ์Šต๋‹ˆ๋‹ค. ์šฐ์„ , ์‚ผ๊ถŒ๋ถ„๋ฆฝ์€ ๋Œ€ํ†ต๋ นใ†๊ตญํšŒ์˜์›ใ†๋™ ๋ฒ•๊ด€์œผ๋กœ ์กฐ์ง๋œ ์ธ๋ฏผ์˜ ํ†ต์ œ์— ์˜ํ•ด ๊ตญ๊ฐ€ ๊ถŒํ•œ์„ ๋‚˜๋ˆ„์–ด ์œ„์ž„ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋Œ€ํ†ต๋ น์€ ๊ตญ๊ฐ€์›์ˆ˜๋กœ์„œ ํ–‰์ •๊ถŒ์„, ๊ตญํšŒ์˜์›์€ ์ž…๋ฒ•๊ถŒ์„, ๋Œ€๋ฒ•์›์€ ์‚ฌ๋ฒ•๊ถŒ์„ ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์— ๋”ฐ๋ผ ํ–‰์ •๋ถ€ใ†์ž…๋ฒ•๋ถ€ใ†์‚ฌ๋ฒ•๋ถ€์˜ ๊ฒฌ์ œ์™€ ๊ท ํ˜•์„ ํ†ตํ•ด ์ •์น˜์  ์•ˆ์ •์„ฑ์„ ํ™•๋ณดํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ตญ๋ฏผ์ฃผ๊ถŒ์€ ํ—Œ๋ฒ• ์ œ1์กฐ์—์„œ "๋Œ€ํ•œ๋ฏผ๊ตญ์€ ๋ฏผ์ฃผ๊ณตํ™”๊ตญ"์ž„์„ ์„ ์–ธํ•˜๊ณ , ๋ชจ๋“  ๊ถŒ๋ ฅ์€ ๊ตญ๋ฏผ์œผ๋กœ๋ถ€ํ„ฐ ๋‚˜์˜จ๋‹ค๋Š” ์›๋ฆฌ๋ฅผ ๊ทœ์ •ํ•ฉ๋‹ˆ๋‹ค. ๊ตญ๋ฏผ์ผ๋ฐ˜์ด ์ตœ๊ณ ์˜ ์ฃผ๊ถŒ์ž์ž„์„ ๋ถ„๋ช…ํžˆ ๋ณด์—ฌ ์ฃผ๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ํšŒ์ œ๋„๋Š” ์ž…๋ฒ•๋ถ€๋ฅผ ๊ตฌ์„ฑํ•˜๋Š” ์˜ํšŒ์™€ ๊ทธ ๊ธฐ๋Šฅ์„ ์ •ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์˜์›์€ ๊ตญ๋ฏผ์˜ ๋Œ€ํ‘œ๋กœ ์„ ์ถœ๋˜๊ณ , ๊ตญ์ •์šด์˜์— ๋Œ€ํ•œ ๊ตญ์ •๊ฐ์‹œใ†๋น„ํŒใ†๊ฐœ์„ ์ฑ…์„ ๋งˆ๋ จํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ํ—Œ๋ฒ•์— ๊ทœ์ •๋œ ์˜๋ฌด๋ฅผ ์ดํ–‰ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ์ž…๋ฒ•๊ถŒ๊ณผ ์‚ฌ๋ฒ•๊ถŒ์€ ํ—Œ๋ฒ•์ œ์ •ใ†๊ฐœ์ •์— ๊ด€์—ฌํ•˜๊ณ , ๋ฒ•๋ฅ ์„ ์ œ์ •ใ†๊ฐœํํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ์‚ฌ๋ฒ•๊ถŒ์€ ํ—Œ๋ฒ•์„ ํ•ด์„ใ†์ ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์— ๋”ฐ๋ผ ์šฐ๋ฆฌ๋‚˜๋ผ์˜ ๋ฒ•์น˜์ฃผ์˜ ์›์น™์„ ๊ตฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. ํ•œ๋งˆ๋””๋กœ ์ธ๋ฏผ์˜ ๊ธฐ๋ณธ๊ถŒ์„ ๋ณดํ˜ธํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด๋Ÿฌํ•œ ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ํ—Œ๋ฒ•์€ ์ธ๊ฐ„์˜ ์ž์œ ์™€ ๊ถŒ๋ฆฌ๋ฅผ ๋ณด์žฅํ•˜๊ธฐ ์œ„ํ•ด ๊ตญ๋ฏผ์˜ ๊ธฐ๋ณธ๊ถŒ์„ ๊ทœ์ •ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ๊ธฐ๋ณธ๊ถŒ์—๋Š” ๋ชจ๋“  ๊ตญ๋ฏผ์€ ์ธ๊ฐ„์˜ ์กด์—„๊ณผ ๊ฐ€์น˜ ๋ฐ ํ–‰๋ณต์ถ”๊ตฌ๊ถŒ, ์ผํ•˜๊ณ , ์ผํ•˜๋Š” ๊ถŒ๋ฆฌ๋ฅผ ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ์ง‘ํšŒใ†๊ฒฐ์‚ฌใ†ํ‘œํ˜„์˜ ์ž์œ ์™€ ์„ ๊ฑฐ๊ถŒ์„ ๋ณด์žฅํ•ฉ๋‹ˆ๋‹ค. ์ž์œ ์™€ ๊ธฐ๋ณธ๊ถŒ์„ ๋ณด์žฅํ•œ ๊ฒƒ์€ ํ—Œ๋ฒ•์ด ์ง€ํ–ฅํ•˜๋Š” ๋ฏผ์ฃผ์  ๊ธฐ๋ณธ์งˆ์„œ ํ˜•์„ฑ์„ ๊ตฌํ˜„ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. 
๋งˆ์ง€๋ง‰์œผ๋กœ ๊ตญ๋ฏผ์€ ํ—Œ๋ฒ•์„ ๊ฐœ์ •ํ•˜๊ณ  ์ƒˆ์‚ผ ์ œ์ •์„ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์™€ ๊ฐ™์€ ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ œํ—Œ ํ—Œ๋ฒ•์€ ๊ตญ๋ฏผ์˜ ๊ธฐ๋ณธ๊ถŒ๊ณผ ํ—Œ๋ฒ• ์งˆ์„œ๋ฅผ ๊ฒฌ๊ณ ํ•˜๊ฒŒ ํ•˜์—ฌ, ์ •์น˜์  ์•ˆ์ •์„ฑ์„ ์œ ์ง€ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ํ—Œ๋ฒ•์˜ ๋ณ€๊ฒฝ์‹œ์—๋Š” ๊ณต์ •ํ•˜๊ณ  ๊ณตํ‰ํ•œ ํ‘œ๊ฒฐ์„ ๊ฑฐ์ณ์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ œํ—Œ ํ—Œ๋ฒ•์€ ๊ทธ ์˜๋ฏธ๊ฐ€ ๋งค์šฐ ํฝ๋‹ˆ๋‹ค. ํ—Œ๋ฒ•์„ ํ†ตํ•ด ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ๋ฏผ์ฃผ์ฃผ์˜๋ฅผ ํ™•๊ณ ํžˆ ํ•˜๋Š” ๊ฒƒ์ž…๋‹ˆ๋‹ค. ๊ตญ๋ฏผ์˜ ๊ถŒ๋ฆฌ์™€ ์˜๋ฌด๋ฅผ ์ •ํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ดํ—Œ๋ฒ•์€ ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ๊ธฐ๋ณธ์›์น™์ด ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. 7ใ†28์„ ์–ธ๊ณผ 5ใ†16๊ตฐ์‚ฌํ˜๋ช…์„ ํ†ตํ•ด ํ™•๋Œ€๋˜์–ด ํ—Œ๋ฒ•์˜ ์กด์—„์„ฑ์ด ๋‹ค์‹œ ์ •๋ฆฝ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ดํ—Œ๋ฒ•์€ ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ๊ทผ๊ฐ„์ด ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ๋„์›€์ด ํ•„์š”ํ•˜์…จ๋‚˜์š”? ์ถ”๊ฐ€๋กœ ์„ค๋ช…ํ•ด ๋“œ๋ฆด๊นŒ์š”? ``` ```python messages = [ {"role": "system", "content": "์นœ์ ˆํ•œ ์ฑ—๋ด‡์œผ๋กœ์„œ ์ƒ๋Œ€๋ฐฉ์˜ ์š”์ฒญ์— ์ตœ๋Œ€ํ•œ ์ž์„ธํ•˜๊ณ  ์นœ์ ˆํ•˜๊ฒŒ ๋‹ตํ•˜์ž. ๋ชจ๋“  ๋Œ€๋‹ต์€ ํ•œ๊ตญ์–ด(Korean)์œผ๋กœ ๋Œ€๋‹ตํ•ด์ค˜."}, {"role": "user", "content": "์ด์ˆœ์‹  ์žฅ๊ตฐ์— ๋Œ€ํ•ด ์„ค๋ช…ํ•ด์ค˜."}, ] --- ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ์กฐ์„ ์‹œ๋Œ€์˜ ๋ฌด์ธ์œผ๋กœ, ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ๊ตญ๊ฐ€์œ ๊ณต์ž์ธ ์šฐ์˜์ •๊ณต ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ 1545๋…„ 9์›” 28์ผ ๊ฒฝ์ƒ๋‚จ๋„ ์˜๋ น์—์„œ ํƒœ์–ด๋‚ฌ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ํ†ต์ œ์‚ฌ ์‹œ์ ˆ์˜ ์™œ๋ž€ ๋•Œ ์กฐ์„ ๊ณผ ๋ช…๋‚˜๋ผ ์—ฐํ•ฉ๊ตฐ์˜ ๋Œ€๊ทœ๋ชจ ๊ตฐ๋Œ€๋ฅผ ์ผ์œผ์ผœ ๋Œ€์ฒฉ์„ ์„ฑ๊ณต์ ์œผ๋กœ ์ด๋Œ์–ด ์ •์œ ์žฌ๋ž€๊ณผ ์ž„์ง„์™œ๋ž€์„ ์Šน๋ฆฌ๋กœ ์ด๋ˆ ์ธ๋ฌผ์ž…๋‹ˆ๋‹ค. ๊ทธ๋Š” 1592๋…„ ์ „๋ผ์ขŒ์ˆ˜์˜๊ด€์ฐฐ์‚ฌ๊ฐ€ ๋˜์–ด ์ „๋ผ์ขŒ์ˆ˜์˜์—์„œ ์™œ์ ์„ ๋ฌผ๋ฆฌ์ณค์œผ๋ฉฐ, 1597๋…„์—๋Š” ์ˆ˜์›๊ณผ ํ•œ์–‘ ๋ฐฉ์–ด์„ ์„ฑ๊ณต์ ์œผ๋กœ ์น˜๋ฃŒํ–ˆ์Šต๋‹ˆ๋‹ค. ๋˜ํ•œ ๋ช…๋Ÿ‰๋Œ€์ฒฉ์—์„œ ์™œ์ ๊ณผ ๊ฒฉ์ „ํ•˜์—ฌ ์ด์ˆœ์‹  ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ๋ช…๋Ÿ‰๋Œ€์ฒฉ์—์„œ ์กฐ์„  ๊ด€๊ตฐ์˜ ์Šน๋ฆฌ๋ฅผ ์ด๋Œ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ์˜๋ฆฌ๋ฅผ ์ง€ํ‚ค๊ธฐ ์œ„ํ•ด ์ „๋ฆฌํ’ˆ์„ ๊ณ ์ˆ˜ํ•˜๋Š” ๊ฒฐ๋‹จ์„ ๋‚ด๋ ธ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ 1598๋…„ ์‚ฌ์ฒœ์„ฑ ์ „ํˆฌ์—์„œ ํŒจ์ „ํ•œ ํ›„ ์ „์„ธ๊ฐ€ ์—ญ์ „๋ผ ์ „์„ธ๊ฐ€ ๋ถˆ๋ฆฌํ•ด์ง€์ž, ๋‚จํ•˜ํ•˜์—ฌ ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ๊ฒฐ๊ตญ ์ถฉ๋ฌด๊ณต ์ด์ˆœ์‹ ์˜ ์นญํ˜ธ๋ฅผ ๋ฐ›์•˜์Šต๋‹ˆ๋‹ค. ๊ทธ์˜ ๊ณต์ ์€ ๋Œ€ํ•œ๋ฏผ๊ตญ ์ด์ˆœ์‹  ์žฅ๊ตฐ ๊ธฐ๋…๊ด€์œผ๋กœ ๋ช…์˜ˆ๋ฅผ ๋†’์ด๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ๋‹˜์€ ์„ ํ˜„์˜ ์นญํ˜ธ๋กœ 1963๋…„ ๊ฑด๊ตญํ›ˆ์žฅ ์ตœ๊ณ  ํ›ˆ์žฅ์„ ์ˆ˜์—ฌ๋ฐ›์•˜์œผ๋ฉฐ, ๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ๊ตญ๋ณด ์ œ13ํ˜ธ๋กœ ์ง€์ •๋์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ์ „์„ค์ ์ธ ์ธ๋ฌผ๋กœ ํ•œ๊ตญ์˜ ์—ญ์‚ฌ์—์„œ ํฐ ์กฑ์ ์„ ๋‚จ๊ฒผ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์ด ์™œ๋ž€ ๋‹น์‹œ์˜ ์ „ํˆฌ์žฅ์†Œ๋Š” ์ถฉ๋ฌด๊ณต์ด ์‹ ๋ผ ์ฒœ์ž์—๊ฒŒ ๋‚˜๋ผ๋ฅผ ์œ„ํ•ด ์‹ธ์šด ๊ณณ์„ ์ด์ˆœ์‹  ์žฅ๊ตฐ์˜ ๋ฌ˜๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ๋‚˜๋ผ์˜ ๊ณ ๋‚œ์— ์ง€์„ฑ ์ด์ˆœ์‹  ์žฅ๊ตฐ๋‹˜์˜ ์œ„์—…๊ณผ ์˜๋ฆฌ๋ฅผ ๊ธฐ๋…ํ•˜๋Š” ๊ณณ์ž…๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์˜ ํ–‰์ ์€ ์ด์ˆœ์‹  ์žฅ๊ตฐ ๊ธฐ๋…๊ด€, ์ด์ˆœ์‹  ๊ณต์› ๋“ฑ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์˜ ๊ณต์ ์„ ๊ธฐ๋…ํ•˜๊ธฐ ์œ„ํ•ด ๋‹ค์–‘ํ•œ ๋ฌธํ™”์œ ์‚ฐ๊ณผ ๊ธฐ๋…๋ฌผ์ด ์กฐ์„ฑ๋์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์˜ ์ •์‹ ์€ ๋‚˜๋ผ๋ฅผ ์œ„ํ•ด ์‹ธ์šด ์œ„๋Œ€ํ•œ ์šฐ๋ฆฌ์˜ ์˜์›…์ž…๋‹ˆ๋‹ค. 1552๋…„ 12์›”์—๋Š” ์ด์ˆœ์‹  ์žฅ๊ตฐ์˜ ์ „์„ค์„ ์˜›๋‚  ๋™๋„ค์— ๋ฐฐํ–ฅ, ํ˜„์žฌ๋Š” ์ถฉ๋ฌด๊ณต ์ด์ˆœ์‹  ์žฅ๊ตฐ๊ป˜์„œ๋Š” ์šฐ๋ฆฌ๋‚˜๋ผ์˜ ์ƒ์ง•์  ์ธ๋ฌผ์ด ๋์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ์กฐ์„ ์™•์กฐ์˜ ์ค‘์‹œ๋ฅผ ๋ฐ›๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ์„ ์กฐ์—๊ฒŒ ๋งŽ์€ ๋Œ€์˜๋ฅผ ์ด๋ฃจ์—ˆ์Šต๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์˜ ๊ณต์ ์„ ์˜ค๋Š˜๋‚ ์— ์ด๋Œ์–ด ์ฃผ๋Š” ์ธ๋ฌผ์ž…๋‹ˆ๋‹ค. ๊ทธ๋ฅผ ๋ชจ์‹  ๊ฒƒ์€ ์ „๊ตญ์˜ ์‚ฌ๋žŒ๋“ค์ž…๋‹ˆ๋‹ค. 
์ด์ˆœ์‹  ์žฅ๊ตฐ๋‹˜์˜ ๊ณ ๊ฒฐํ•œ ์ •์‹ ์€ ์šฐ๋ฆฌ๋‚˜๋ผ๊ฐ€ ์ด๋ฃฉํ•œ ์„ฑ๊ณผ์ž…๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ๋‚˜๋ผ๋ฅผ ์œ„ํ•ด ์‹ธ์šด ์œ„๋Œ€ํ•œ ์ธ๋ฌผ์ž…๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์€ ๋Œ€ํ•œ๋ฏผ๊ตญ์—์„œ ๊ฐ€์žฅ ์œ„๋Œ€ํ•œ ์ธ๋ฌผ์ž…๋‹ˆ๋‹ค. ์ด์ˆœ์‹  ์žฅ๊ตฐ์˜ ํ–‰์ ์€ ์˜์›ํžˆ ๊ธฐ์–ต ๋˜๊ฒ ์ง€์š”. ``` ```python messages = [ {"role": "system", "content": "์นœ์ ˆํ•œ ์ฑ—๋ด‡์œผ๋กœ์„œ ์ƒ๋Œ€๋ฐฉ์˜ ์š”์ฒญ์— ์ตœ๋Œ€ํ•œ ์ž์„ธํ•˜๊ณ  ์นœ์ ˆํ•˜๊ฒŒ ๋‹ตํ•˜์ž. ๋ชจ๋“  ๋Œ€๋‹ต์€ ํ•œ๊ตญ์–ด(Korean)์œผ๋กœ ๋Œ€๋‹ตํ•ด์ค˜."}, {"role": "user", "content": "์„ธ์ข…๋Œ€์™•์˜ ์ง‘ํ˜„์ „ ๋งฅ๋ถ ๋˜์ง ์‚ฌ๊ฑด์— ๋Œ€ํ•ด์„œ ์•Œ๊ณ  ์žˆ์–ด?"}, ] --- ๋„ค, ์„ธ์ข…๋Œ€์™•์˜ ์ง‘ํ˜„์ „ ๋งฅ๋ถ ๋˜์ง ์‚ฌ๊ฑด์€ ์กฐ์„ ์‹œ๋Œ€ ์ค‘๊ธฐ์˜ ๋ฌธํ™”์  ๋ฐฐ๊ฒฝ๊ณผ ๊ด€๋ จ๋œ ์ค‘์š”ํ•œ ์‚ฌ๊ฑด์ž…๋‹ˆ๋‹ค. 1399๋…„ 10์›” 2์ผ์— ์ผ์–ด๋‚œ ์ด ์‚ฌ๊ฑด์€ ์„ธ์ข…๋Œ€์™•์ด ์ €์ˆ ํ•œ ใ€Ž์ฃผํ•ดใ€์— ๋Œ€ํ•œ ๋น„ํŒ๋ฌธ์ธ ใ€Ž์˜์‚ฐ๋กใ€์„ ์ €์ˆ ํ•œ ์ตœ๋งŒ๋ฆฌ์™€์˜ ๋…ผ์Ÿ์œผ๋กœ ์‹œ์ž‘๋์Šต๋‹ˆ๋‹ค. ๋‹น์‹œ ์„ธ์ข…๋Œ€์™•์€ ์ง‘ํ˜„์ „ ํ•™์‚ฌ๋“ค์—๊ฒŒ ๋ฌธ์˜ˆ๋ถ€์˜ ์ผ์น˜๋œ ๊ฒฌํ•ด๋ฅผ ๊ตฌํ•˜๋˜ ์ค‘์— ์ตœ๋งŒ๋ฆฌ๊ฐ€ ใ€Ž์˜์‚ฐ๋กใ€์˜ ๊ธ€์„ ๊ฑฐ๋ก ํ•˜์—ฌ ๋น„ํŒํ•˜์ž, ์„ธ์ข… ๋Œ€์™•์ด ๋งฅ๋ถ์„ ๋˜์กŒ๋‹ค๊ณ  ํ•ฉ๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ์‚ฌ์ƒ์ ์œผ๋กœ๋Š” ์กฐ์„  ์™•์กฐ ์ดˆ๊ธฐ ๋ฌธ์˜ˆ๋ถ€์˜ ํ•™์ˆ  ์ฒด๊ณ„์™€ ์ •์น˜์  ๊ฒฌํ•ด์˜ ๋Œ€๋ฆฝ์„ ๋…ธ์ถœํ•˜๋Š” ์‚ฌ๊ฑด์ด์ž ๋ฌธํ•™์‚ฌ์ ์ธ ๋ฌธ์ œ๋กœ ์ด์–ด์กŒ์Šต๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์œผ๋กœ ์ธํ•ด ์กฐ์„ ์‹œ๋Œ€์˜ ๋ฌธ์˜ˆ๋ถ€์™€ ์‚ฌ๋Œ€๋ถ€์˜ ๊ฒฌํ•ด ์ฐจ์ด์ ์„ ๋…ธ์ถœํ•˜๋Š” ์ค‘์š”ํ•œ ์‚ฌ๊ฑด์œผ๋กœ ๊ธฐ๋ก๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ฃผ๋กœ ๋…ผ์Ÿ ๋ฐฐ๊ฒฝ์€ ใ€Ž์ฃผํ•ดใ€์˜ ์ €์ž๋‚˜ ๋‚ด์šฉ ๋ถ„์„์— ๋Œ€ํ•œ ์ด๊ฒฌ๊ณผ ์ง‘ํ˜„์ „ ํ•™์‚ฌ๋“ค์˜ ๋ฌธ์˜ˆ๊ด€์— ๋Œ€ํ•œ ๋น„ํŒ์„ ์ค‘์‹ฌ์œผ๋กœ ์ด๋ค„์กŒ์Šต๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ์กฐ์„  ์ดˆ๊ธฐ ์™•์กฐ ๋‚ด๋ถ€์˜ ๊ฒฌํ•ด ์ฐจ์ด์— ๋”ฐ๋ฅธ ํ•™์ˆ ์ ์ธ ๋…ผ์Ÿ์œผ๋กœ ์‹œ์ž‘๋˜์–ด ์ •์น˜์ ์ธ ์š”์ธ๊นŒ์ง€ ๊ฒฐ๋ถ€ํ•˜์—ฌ ์กฐ์„ ์™•์กฐ ์ดˆ๊ธฐ ๋ฌธํ™”์ ยท์ •์น˜์  ๋…ผ์Ÿ์œผ๋กœ ํ™•์žฅ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ๋‹น์‹œ์— ์™•์กฐ ๋‚ด๋ถ€์˜ ๊ด€๋ฃŒ์ œ๋„์™€ ์„ธ๋ ฅ์ธต์˜ ๊ฐˆ๋“ฑ์„ ๋ณด์—ฌ์ฃผ๋Š” ์ค‘์š”ํ•œ ์˜ˆ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์—์„œ ์ฃผ๋ฅ˜ ํ•™ํ’์˜ ๋น„ํŒ๋ฟ ์•„๋‹ˆ๋ผ, ์—ญ์‚ฌ์  ์‚ฌ์‹ค์˜ ์™œ๊ณก์— ๋Œ€ํ•œ ๋…ผ๋ž€๋„ ์ œ๊ธฐ ๋ฉ๋‹ˆ๋‹ค. ์‹ค์ œ๋กœ ์‹ฌ๊ฐํ•œ ์žฌ๋‚œ์„ ์ดˆ๋ž˜ํ•˜๋Š” ๋Œ€๊ทœ๋ชจ์ ์ธ ๋…ผ์Ÿ์œผ๋กœ ํ™•์‚ฐ๋˜์ง€ ์•Š์•˜๋‹ค ํ•˜๋”๋ผ๋„ ๋‹น์‹œ ์‚ฌ๋Œ€๋ถ€ ๋‚ด๋ถ€์˜ ๊ฒฌํ•ด ์ฐจ์ด๋‚˜ ์ •์น˜์  ์–ต์••์œผ๋กœ ์ธํ•œ ์‚ฌํšŒ์  ์ง€๊ฐ ๋ณ€๋™์„ ์ผ์œผํ‚ค๊ธฐ๋„ ํ–ˆ์Šต๋‹ˆ๋‹ค. ๋‹น์‹œ์˜ ๋งฅ๋ถ ๋˜์ง์€ ๋ฌธ์˜ˆ๋ถ€์˜ ๊ฒฌํ•ด๊ฐ€ ๊ฐˆ๋ž๋˜ ์‚ฌ๊ฑด์ด๋ผ ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋”์šฑ์ด ๋‹น์‹œ์˜ ๋งฅ๋ถ ๋˜์ง์€ ์‚ฌํšŒ์  ์ง€์‹์ธ์ธต์—์„œ๋„ ์ด๊ฒฌ์ด ์žˆ์„ ์ˆ˜ ์žˆ๋Š” ๊ฒƒ์€ ๊ธฐ๋ณธ์ž…๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์œผ๋กœ ์˜ค๋Š˜๋‚ ๊นŒ์ง€๋„ ํ•œ๊ตญ์˜ ์—ญ์‚ฌ์—์„œ ์ค‘์š”ํ•œ ์—ฐ๊ตฌ ๋Œ€์ƒ์ด ๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๋งฅ๋ถ ์ด๋ผ๋Š” ๋ง์€ ์‚ฌ๊ฑด์˜ ์ฃผ๋ฌด๋Œ€๋ผ๋Š” ์šฉ์–ด๋ฅผ ๋งŒ๋“ค์–ด ๋‹น์‹œ์˜ ์‚ฌ๊ฑด์„ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ์ฃผ๋กœ ์ •์น˜ยทํ•™์ˆ ยท์˜ˆ์ˆ  ๋“ฑ ์‚ฌํšŒ์ ยท๋ฌธํ™”์ ์ธ ์—ฌ๋Ÿฌ ๋ถ„์•ผ์—์„œ ์ผ์–ด๋‚˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ์ด๋…์  ๊ด€์ ์˜ ์ฐจ์ด์—์„œ ์‹œ์ž‘๋˜์ง€๋งŒ, ์‚ฌํšŒ์  ๋Œ€๋ฆฝ๊ณผ ๊ฐˆ๋“ฑ๊นŒ์ง€ ์ดˆ๋ž˜ํ•˜๋Š” ์‚ฌ๊ฑด์œผ๋กœ ๊ฐ„์ฃผํ•˜๊ธฐ๋„ ํ•ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ๊ฑด์— ๋Œ€ํ•œ ์žฌํ•ด์„์„ ํ†ตํ•ด ๋‹น์‹œ ํ•œ๊ตญ ์‚ฌํšŒ๋ฅผ ์ดํ•ดํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ตœ๋งŒ๋ฆฌ๋Š” ๋ฌธ์˜ˆ๋ถ€์˜ ๊ฒฌํ•ด๋ฅผ ๋น„ํŒํ•  ์ˆ˜ ์žˆ๋Š” ๊ณต๊ฐ„์œผ๋กœ ๋ณด์•˜์ง€๋งŒ, ๋‹น์‹œ์˜ ๋งฅ๋ถ์€ ์ •์น˜์ ยท๋ฌธ์˜ˆ์ ์œผ๋กœ๋Š” ์‹ฌ๊ฐํ•œ ๋ฌธ์ œ๋กœ ์ธ์‹ํ•˜๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ๋‹น์‹œ์˜ ์ƒํ™ฉ์„ ์ดํ•ดํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ด๋ก ์€ ์ด ์‚ฌํšŒ์˜ ์ฃผ๋ฅ˜ ํ•™ํ’์ด๋‚˜ ๋ฌธํ™”์  ๋ฐฐ๊ฒฝ์„ ์ฝ์„ ์ˆ˜ ์žˆ๋Š” ์ค‘์š”ํ•œ ์‚ฌ๊ฑด์˜ ์ฆ๊ฑฐ์ž…๋‹ˆ๋‹ค. 
๋”ฐ๋ผ์„œ ์ด ์‚ฌ๊ฑด์€ ์˜ค๋Š˜๋‚ ๊นŒ์ง€๋„ ๋‹ค์–‘ํ•œ ๋ฐฉ๋ฉด์—์„œ ์—ฐ๊ตฌ์˜ ๋Œ€์ƒ์œผ๋กœ ๋‚จ์•„ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์ด ์ „์ ์œผ๋กœ ๋…ผ์Ÿ๋งŒ์ด ์•„๋‹ˆ๋ผ๋Š” ์ ์€ ์ฃผ์˜ํ•  ํ•„์š”๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์™€ ํ•จ๊ป˜ ์‚ฌ๋Œ€๋ถ€ ๋‚ด๋ถ€์˜ ๊ฒฌํ•ด ์ฐจ์ด ๊ทธ๋ฆฌ๊ณ  ์ง€์  ๊ฐˆ๋“ฑ์˜ ํญ๋„“์€ ์˜๋ฏธ๋ฅผ ์ดํ•ดํ•˜๋Š”๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์œผ๋กœ ์ธํ•ด, ๋ฌธ์˜ˆ๋ถ€์™€ ์ •์น˜์  ๊ฒฌํ•ด์˜ ๋Œ€๋ฆฝ์€ ํ•œ๊ตญ ์‚ฌํšŒ์˜ ๋ณ€ํ™”์— ๋Œ€ํ•œ ํ•œ ์–‘์ƒ์„ ๋ณด์—ฌ์ฃผ๋Š” ํ•˜๋‚˜์˜ ์ž๋ฃŒ๊ฐ€ ๋ฉ๋‹ˆ๋‹ค. ์ด๋กœ์จ ์šฐ๋ฆฌ๋Š” ์กฐ์„ ์™•์กฐ ์ดˆ๊ธฐ ๋ฌธํ™”์  ๋ฐฐ๊ฒฝ๊ณผ ์ •๊ถŒ์˜ ๊ฒฌํ•ด๋ฅผ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ๋‹น์‹œ์˜ ํ•œ๊ตญ ์‚ฌํšŒ์˜ ๋ฌธํ™”์ ยท์ •์น˜์  ํ˜„์ƒ ๊ฐ€์šด๋ฐ ํ•˜๋‚˜์ž…๋‹ˆ๋‹ค. ์ด์™€ ๊ฐ™์€ ๋งฅ๋ถ ๋˜์ง ์‚ฌ๊ฑด์—์„œ ์šฐ๋ฆฌ๋Š” ๋‹ค์–‘ํ•œ ๋ฐฉ์‹์œผ๋กœ ํ•œ๊ตญ์‚ฌ์˜ ์—ญ์‚ฌ์ ์ธ ๋ฐฐ๊ฒฝ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ๋งฅ๋ถ ์‚ฌ๊ฑด์€ ์‚ฌ๊ฑด ์ž์ฒด๊ฐ€ ์•„๋‹Œ ์‚ฌ๊ฑด์˜ ๋งฅ๋ฝ์—์„œ ํ•œ๊ตญ์‚ฌ์ ์ธ ๋ฐฐ๊ฒฝ์„ ํŒŒ์•…ํ•˜๋Š” ๊ฒƒ์ด ์ค‘์š”ํ•ฉ๋‹ˆ๋‹ค. ์šฐ๋ฆฌ๋Š” ์ด ์‚ฌ๊ฑด์œผ๋กœ ๋‹น์‹œ์˜ ์‚ฌํšŒ์ƒ์„ ์ดํ•ดํ•˜๊ณ  ๋ฌธํ™”์  ๋ฐฐ๊ฒฝ์„ ํŒŒ์•…ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ ์—์„œ ์‚ฌ๊ฑด์€ ๋‹น์‹œ ์‚ฌํšŒ์˜ ์ค‘์š”์„ฑ์„ ๊ฐ•์กฐํ•ฉ๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ์‚ฌํšŒ์‚ฌ์ ์œผ๋กœ ์—ฐ๊ตฌํ•˜์—ฌ ๋‹น์‹œ ์‚ฌํšŒ์ƒ์˜ ์ค‘์š”์„ฑ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์ ์€ ์—ญ์‚ฌ์ ยท๋ฌธํ™”์  ๋ฐฐ๊ฒฝ์„ ์žฌํ•ด์„ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ์‚ฌ๊ฑด์€ ์กฐ์„  ์™•์กฐ ์ดˆ๊ธฐ์˜ ์ฃผ์š” ์‚ฌ๊ฑด์ด ๋ฉ๋‹ˆ๋‹ค. ์ด ์ ์—์„œ ์ค‘์š”ํ•œ ์‚ฌ๊ฑด์œผ๋กœ ์ธ์ •ํ•ฉ๋‹ˆ๋‹ค. ๋…ผ์˜์˜ ์ „๊ฐœ๊นŒ์ง€๋Š” ์ด๋ณด๋‹ค ๋” ๋ณต์žกํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋งฅ๋ถ ๋˜์ง ์‚ฌ๊ฑด ``` ```python messages = [ {"role": "system", "content": "์นœ์ ˆํ•œ ์ฑ—๋ด‡์œผ๋กœ์„œ ์ƒ๋Œ€๋ฐฉ์˜ ์š”์ฒญ์— ์ตœ๋Œ€ํ•œ ์ž์„ธํ•˜๊ณ  ์นœ์ ˆํ•˜๊ฒŒ ๋‹ตํ•˜์ž. ๋ชจ๋“  ๋Œ€๋‹ต์€ ํ•œ๊ตญ์–ด(Korean)์œผ๋กœ ๋Œ€๋‹ตํ•ด์ค˜."}, {"role": "user", "content": "๋‚˜๋Š” ์˜ค๋Š˜ ์‚ฌ๊ณผ 5๊ฐœ๋ฅผ ๊ฐ–๊ณ ์žˆ์–ด. ๊ทธ๋ฆฌ๊ณ  ์–ด์ œ ๋‚˜๋Š” ์‚ฌ๊ณผ 2๊ฐœ๋ฅผ ๋จน์—ˆ์ง€. ๋‚ด๊ฐ€ ์ง€๊ธˆ ๊ฐ€์ง„ ์‚ฌ๊ณผ๋Š” ๋ช‡ ๊ฐœ์ง€?"}, ] --- ์‚ฌ๊ณผ์˜ ์ˆ˜๋Ÿ‰์„ ํ™•์ธํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค. ์ €๋Š” 5๊ฐœ์˜ ์‚ฌ๊ณผ๊ฐ€ ์žˆ์œผ์‹ ๋ฐ, ๊ทธ ์ค‘ 2๊ฐœ๋ฅผ ์ด๋ฏธ ๋จน์—ˆ์œผ๋‹ˆ 3๊ฐœ์˜ ์‚ฌ๊ณผ๊ฐ€ ๋‚จ์•„์žˆ์Šต๋‹ˆ๋‹ค! ใ…Žใ…Ž. ๊ณ„์†ํ•ด์„œ ๋จน์„ ๊ณ„ํš ์—†์ž–์•„์š”? ๐Ÿ˜‰ ```
{"language": ["en", "ko"], "license": "other", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"}
teddylee777/Llama-3-Open-Ko-8B-Instruct-preview-gguf
null
[ "transformers", "gguf", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "llama-3-ko", "conversational", "en", "ko", "arxiv:2310.04799", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-24T18:31:35+00:00