modelId
stringlengths
5
122
author
stringlengths
2
42
last_modified
unknown
downloads
int64
0
738M
likes
int64
0
11k
library_name
stringclasses
245 values
tags
sequencelengths
1
4.05k
pipeline_tag
stringclasses
48 values
createdAt
unknown
card
stringlengths
1
901k
failspy/Llama-3-70B-Instruct-abliterated-v3
failspy
"2024-05-30T12:22:41Z"
1,414
17
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-19T19:38:23Z"
--- library_name: transformers license: llama3 --- # Llama-3-70B-Instruct-abliterated-v3 Model Card ## [Get v3.5 of this model instead!](https://huggingface.co/failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5) [My Jupyter "cookbook" to replicate the methodology can be found here, refined library coming soon](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb) This is [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) with orthogonalized bfloat16 safetensor weights, generated with a refined methodology based on that which was described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)' which I encourage you to read to understand more. ## Hang on, "abliteration"? Orthogonalization? Ablation? What is this? TL;DR: This model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in anyway _guaranteed_ that it won't refuse you, understand your request, it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 70B instruct model was, just with the strongest refusal directions orthogonalized out. **TL;TL;DR;DR: It's uncensored in the purest form I can manage -- no new or changed behaviour in any other respect from the original model.** As far as "abliteration": it's just a fun play-on-words using the original "ablation" term used in the original paper to refer to removing features, which I made up particularly to differentiate the model from "uncensored" fine-tunes. Ablate + obliterated = Abliterated Anyways, orthogonalization/ablation are both aspects to refer to the same thing here, the technique in which the refusal feature was "ablated" from the model was via orthogonalization. ## A little more on the methodology, and why this is interesting To me, ablation (or applying the methodology for the inverse, "augmentation") seems to be good for inducing/removing very specific features that you'd have to spend way too many tokens on encouraging or discouraging in your system prompt. Instead, you just apply your system prompt in the ablation script against a blank system prompt on the same dataset and orthogonalize for the desired behaviour in the final model weights. > Why this over fine-tuning? Ablation is much more surgical in nature whilst also being effectively executed with a _lot_ less data than fine-tuning, which I think is its main advantage. As well, and its most valuable aspect is it keeps as much of the original model's knowledge and training intact, whilst removing its tendency to behave in one very specific undesireable manner. (In this case, refusing user requests.) Fine tuning is still exceptionally useful and the go-to for broad behaviour changes; however, you may be able to get close to your desired behaviour with very few samples using the ablation/augmentation techniques. It may also be a useful step to add to your model refinement: orthogonalize -> fine-tune or vice-versa. I haven't really gotten around to exploring this model stacked with fine-tuning, I encourage others to give it a shot if they've got the capacity. > Okay, fine, but why V3? There's no V2 70B? Well, I released a V2 a while back for 8B under Cognitive Computations. It ended up being not worth it to try V2 with 70B, I wanted to refine the model before wasting compute cycles on what might not even be a better model. I am however quite pleased about this latest methodology, it seems to have induced fewer hallucinations. So to show that it's a new fancy methodology from even that of the 8B V2, I decided to do a Microsoft and double up on my version jump because it's *such* an advancement (or so the excuse went, when in actuality it was because too many legacy but actively used Microsoft libraries checked for 'Windows 9' in the OS name to detect Windows 95/98 as one.) ## Quirkiness awareness notice This model may come with interesting quirks, with the methodology being so new. I encourage you to play with the model, and post any quirks you notice in the community tab, as that'll help us further understand what this orthogonalization has in the way of side effects. If you manage to develop further improvements, please share! This is really the most basic way to use ablation, but there are other possibilities that I believe are as-yet unexplored. Additionally, feel free to reach out in any way about this. I'm on the Cognitive Computations Discord, I'm watching the Community tab, reach out! I'd love to see this methodology used in other ways, and so would gladly support whoever whenever I can.
lmstudio-community/Codestral-22B-v0.1-GGUF
lmstudio-community
"2024-06-05T21:35:08Z"
1,414
5
null
[ "gguf", "code", "text-generation", "base_model:mistralai/Codestral-22B-v0.1", "license:apache-2.0", "region:us" ]
text-generation
"2024-05-29T19:01:50Z"
--- license_link: https://mistral.ai/licences/MNPL-0.1.md tags: - code language: - code license: apache-2.0 quantized_by: bartowski pipeline_tag: text-generation lm_studio: param_count: 22b use_case: coding release_date: 29-05-2024 model_creator: mistralai prompt_template: Mistral Instruct system_prompt: none base_model: mistral original_repo: mistralai/Codestral-22B-v0.1 base_model: mistralai/Codestral-22B-v0.1 --- ## ๐Ÿ’ซ Community Model> Codestral 22B v0.1 by Mistral AI *๐Ÿ‘พ [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*. **Model creator:** [Mistral AI](https://huggingface.co/mistralai)<br> **Original model**: [Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)<br> **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3024](https://github.com/ggerganov/llama.cpp/releases/tag/b3024)<br> ## Model Summary: Codestral is a brand new coding model released by the Mistral team. This 22B model is the first of its size and the first ever specialized model released by this team.<br> Supporting both instruction prompting and popular Fill in the Middle (FIM) tokens for predictions, this model should be all around great for all your coding tasks. ## Prompt template: Choose the `Mistral Instruct` preset in your LM Studio. Under the hood, the model will see a prompt that's formatted like so: ``` <s>[INST] {prompt} [/INST]</s> ``` This model also supports the following FIM tokens: `[PREFIX]`, `[SUFFIX]` ## Technical Details Codestral 22B 0.1 is trained on a dataset of 80+ programming languages including of course Python, Java, C++, Javascript, and Bash. It supports both instruction querying as well as Fill in the Middle querying. More details and benchmark information can be found on their blogpost here: https://mistral.ai/news/codestral/ ## Special thanks ๐Ÿ™ Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) ๐Ÿ™ Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) that was used for calculating the imatrix for all sizes. ## Disclaimers LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
ml6team/mt5-small-german-query-generation
ml6team
"2022-04-27T06:24:37Z"
1,413
2
transformers
[ "transformers", "pytorch", "mt5", "text2text-generation", "query-generation", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2022-04-26T13:51:02Z"
--- language: - de tags: - pytorch - query-generation widget: - text: "Das Lama (Lama glama) ist eine Art der Kamele. Es ist in den sรผdamerikanischen Anden verbreitet und eine vom Guanako abstammende Haustierform." example_title: "Article 1" license: apache-2.0 metrics: - Rouge-Score --- # mt5-small-german-query-generation ## Model description: This model was created with the purpose to generate possible queries for a german input article. For this model, we finetuned a multilingual T5 model [mt5-small](https://huggingface.co/google/mt5-small) on the [MMARCO dataset](https://huggingface.co/datasets/unicamp-dl/mmarco) the machine translated version of the MS MARCO dataset. The model was trained for 1 epoch, on 200,000 unique queries of the dataset. We trained the model on one K80 GPU for 25,000 iterations with following parameters: - learning rate: 1e-3 - train batch size: 8 - max input sequence length: 512 - max target sequence length: 64 ## Model Performance: Model evaluation was done on 2000 evaluation paragraphs of the dataset. Mean [f1 ROUGE scores](https://github.com/pltrdy/rouge) were calculated for the model. | Rouge-1 | Rouge-2 | Rouge-L | |---|---|---| |0.162 | 0.052 | 0.161 |
euclaise/Ferret_7B
euclaise
"2023-11-21T17:40:36Z"
1,413
7
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "dataset:euclaise/MiniCoT", "dataset:euclaise/SciCoT", "dataset:euclaise/symtune_mini", "dataset:euclaise/mathoverflow-accepted", "dataset:euirim/goodwiki", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-28T12:12:56Z"
--- license: other datasets: - euclaise/MiniCoT - euclaise/SciCoT - euclaise/symtune_mini - euclaise/mathoverflow-accepted - euirim/goodwiki --- A pre-finetuning finetuned version of Mistral 7B 0.1, focused on CoT reasoning tasks. Probably decent at reasoning, but also probably not great as a chat assistant- it's designed to be finetuned further to give it a friendlier style. As such, it is intentionally somewhat undertrained. Current benchmarks aren't great for instruct models, so I've temporarily omitted them. I'm working on a benchmark suite for instruct models though, and will update this with scores when that is released. Uses ChatML prompt formatting. I reserve no rights to the model. To the extent possible under law, I release it as public domain. However, the datasets used have various licenses that may impact how the model may be used in your jurisdiction.
timm/tf_efficientnetv2_xl.in21k_ft_in1k
timm
"2023-04-27T22:18:18Z"
1,412
3
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-21k", "arxiv:2104.00298", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-13T00:20:45Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-21k --- # Model card for tf_efficientnetv2_xl.in21k_ft_in1k A EfficientNet-v2 image classification model. Trained on ImageNet-21k and fine-tuned on ImageNet-1k in Tensorflow by paper authors, ported to PyTorch by Ross Wightman. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 208.1 - GMACs: 52.8 - Activations (M): 139.2 - Image size: train = 384 x 384, test = 512 x 512 - **Papers:** - EfficientNetV2: Smaller Models and Faster Training: https://arxiv.org/abs/2104.00298 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k - **Original:** https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tf_efficientnetv2_xl.in21k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnetv2_xl.in21k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 192, 192]) # torch.Size([1, 64, 96, 96]) # torch.Size([1, 96, 48, 48]) # torch.Size([1, 256, 24, 24]) # torch.Size([1, 640, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tf_efficientnetv2_xl.in21k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1280, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). ## Citation ```bibtex @inproceedings{tan2021efficientnetv2, title={Efficientnetv2: Smaller models and faster training}, author={Tan, Mingxing and Le, Quoc}, booktitle={International conference on machine learning}, pages={10096--10106}, year={2021}, organization={PMLR} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
castorini/rank_vicuna_7b_v1_fp16
castorini
"2023-09-27T08:17:34Z"
1,412
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "information retrieval", "reranker", "en", "arxiv:2307.09288", "arxiv:2309.15088", "license:llama2", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-27T06:43:12Z"
--- inference: false license: llama2 language: - en tags: - information retrieval - reranker --- # RankVicuna (FP16) Model Card ## Model Details RankVicuna is a chat assistant trained by fine-tuning Llama 2 on user-shared conversations collected from ShareGPT. - **Developed by:** [Castorini](https://github.com/castorini) - **Model type:** An auto-regressive language model based on the transformer architecture - **License:** Llama 2 Community License Agreement - **Finetuned from base model:** [Llama 2](https://arxiv.org/abs/2307.09288) This specific model is a 7B variant and is trained with data augmentation. It is also worth noting that it is converted to FP16. ### Model Sources - **Repository:** https://github.com/castorini/rank_llm - **Paper:** https://arxiv.org/abs/2309.15088 ## Uses The primary use of RankVicuna is research at the intersection of large language models and retrieval. The primary intended users of the model are researchers and hobbyists in natural language processing and information retrieval. ## Training Details RankVicuna is finetuned from `lmsys/vicuna-7b-v1.5` with supervised instruction fine-tuning. ## Evaluation RankVicuna is currently evaluated on DL19/DL20. See more details in our [paper](https://arxiv.org/pdf/2309.15088.pdf).
TeamUNIVA/Komodo_6B_v3.0.0
TeamUNIVA
"2024-03-04T11:27:20Z"
1,412
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "ko", "en", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-04T10:58:49Z"
--- license: apache-2.0 language: - ko - en --- # Base Model beomi/Yi-Ko-6B ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "TeamUNIVA/Komodo_6B_v3.0.0" model = AutoModelForCausalLM.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) text = '''<|system|> ๋‹น์‹ ์€ ์‚ฌ์šฉ์ž์˜ ์งˆ๋ฌธ์— ์นœ์ ˆํ•˜๊ฒŒ ๋‹ต๋ณ€์„ ํ•˜๋Š” ์ฑ—๋ด‡์ž…๋‹ˆ๋‹ค. <|user|> ์•ˆ๋…•ํ•˜์„ธ์š”? <|bot|> ''' inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
nyunai/nyun-c2-llama3-56B
nyunai
"2024-06-14T04:51:29Z"
1,412
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "custom_code", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-14T04:34:25Z"
--- license: llama3 --- # ๐Ÿ”น Key Highlights: - 20% Fewer Parameters: nyun-c2-llama3-56B comprises approximately 20% fewer parameters than the popular Llama-3-70B. - Better Performance: Despite having far fewer parameters, this model has better performance than Llama-3-70B. - No Fine-Tuning Required: This model undergoes no fine-tuning, showcasing the raw potential of our optimization techniques. ## Pipeline and Collaboration For insights into the pipeline and the list of methods used to optimize these models, check out our PruneGPT repository (https://github.com/nyunAI/PruneGPT). We invite companies and organizations interested in joining forces with us to release more such open-source variants to reach out at [email protected]. ### Model Performance | Dataset | nyun-c2-llama3-56B | Meta-Llama3-70B | Meta-Llama2-70B | MBZUAI K2-65B | | --- | --- | --- | --- | --- | | MMLU (5-shot) | 78.4 | 79.5 | 69.7 | 67.9 | | Winogrande (5-shot) | 85.5 | 83.1 | 81.8 | 77.0 | | BoolQ (0-shot) | 85.1 | 79.0 | 73.1 | 83.0 | | Hellaswag (10-shot) | 86.9 | 88.0 | 86.9 | 85.5 | | Arc Challenge (25-shot) | 66.0 | 68.8 | 67.2 | 64.8 | | GSM8K (5-shot) | 76.8 | 76.9 | 52.6 | 50.2 | | Average | 79.8 | 79.2 | 71.9 | 71.4 | - **Developed by:** [Nyun AI](https://nyunai.com/) - **Repository:** [Github](https://github.com/nyunAI/PruneGPT)
BEE-spoke-data/smol_llama-81M-tied
BEE-spoke-data
"2023-11-20T16:14:28Z"
1,411
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "smol_llama", "llama2", "en", "dataset:JeanKaddour/minipile", "dataset:pszemraj/simple_wikipedia_LM", "dataset:BEE-spoke-data/wikipedia-20230901.en-deduped", "dataset:mattymchen/refinedweb-3m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-10-26T08:15:17Z"
--- license: apache-2.0 thumbnail: https://i.ibb.co/TvyMrRc/rsz-smol-llama-banner.png language: - en inference: parameters: max_new_tokens: 64 do_sample: true temperature: 0.8 repetition_penalty: 1.15 no_repeat_ngram_size: 4 eta_cutoff: 0.0006 renormalize_logits: true widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: Kennesaw State University is a public example_title: Kennesaw State University - text: >- Bungie Studios is an American video game developer. They are most famous for developing the award winning Halo series of video games. They also made Destiny. The studio was founded example_title: Bungie - text: The Mona Lisa is a world-renowned painting created by example_title: Mona Lisa - text: >- The Harry Potter series, written by J.K. Rowling, begins with the book titled example_title: Harry Potter Series - text: >- Question: I have cities, but no houses. I have mountains, but no trees. I have water, but no fish. What am I? Answer: example_title: Riddle - text: The process of photosynthesis involves the conversion of example_title: Photosynthesis - text: >- Jane went to the store to buy some groceries. She picked up apples, oranges, and a loaf of bread. When she got home, she realized she forgot example_title: Story Continuation - text: >- Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, and another train leaves Station B at 10:00 AM and travels at 80 mph, when will they meet if the distance between the stations is 300 miles? To determine example_title: Math Problem - text: In the context of computer programming, an algorithm is example_title: Algorithm Definition pipeline_tag: text-generation tags: - smol_llama - llama2 datasets: - JeanKaddour/minipile - pszemraj/simple_wikipedia_LM - BEE-spoke-data/wikipedia-20230901.en-deduped - mattymchen/refinedweb-3m --- # smol_llama-81M-tied <img src="smol-llama-banner.png" alt="banner" style="max-width:80%; height:auto;"> A small 81M param (total) decoder model, enabled through tying the input/output embeddings. This is the first version of the model. - 768 hidden size, 6 layers - standard multi-head attention (24 heads), context length 1024 - input/output embeddings **are tied** - train-from-scratch ## Notes **This checkpoint** is the 'raw' pre-trained model and has not been tuned to a more specific task. **It should be fine-tuned** before use in most cases. - slightly larger 101M param GQA pretrained version: [here](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) - For the chat version of this model, please [see here](https://youtu.be/dQw4w9WgXcQ?si=3ePIqrY1dw94KMu4) --- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied) | Metric | Value | |-----------------------|---------------------------| | Avg. | 24.52 | | ARC (25-shot) | 22.18 | | HellaSwag (10-shot) | 29.33 | | MMLU (5-shot) | 24.06 | | TruthfulQA (0-shot) | 43.97 | | Winogrande (5-shot) | 49.25 | | GSM8K (5-shot) | 0.23 | | DROP (3-shot) | 2.64 |
elyza/ELYZA-japanese-Llama-2-13b-fast
elyza
"2023-12-27T01:41:31Z"
1,411
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ja", "en", "arxiv:2307.09288", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-25T17:14:44Z"
--- license: llama2 language: - ja - en --- ## ELYZA-japanese-Llama-2-13b-fast ![ELYZA-Japanese-Llama2-image](./key_visual.png) ### Model Description **ELYZA-japanese-Llama-2-13b** ใฏใ€ Llama 2ใ‚’ใƒ™ใƒผใ‚นใจใ—ใฆๆ—ฅๆœฌ่ชž่ƒฝๅŠ›ใ‚’ๆ‹กๅผตใ™ใ‚‹ใŸใ‚ใซ่ฟฝๅŠ ไบ‹ๅ‰ๅญฆ็ฟ’ใ‚’่กŒใฃใŸใƒขใƒ‡ใƒซใงใ™ใ€‚ ่ฉณ็ดฐใฏ [Blog่จ˜ไบ‹](https://note.com/elyza/n/n5d42686b60b7) ใ‚’ๅ‚็…งใ—ใฆใใ ใ•ใ„ใ€‚ ### Usage ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "elyza/ELYZA-japanese-Llama-2-13b-fast" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype=torch.float16, use_cache=True, device_map="auto", low_cpu_mem_usage=True, ) model.eval() text = "่‡ช็„ถ่จ€่ชžๅ‡ฆ็†ใจใฏใ€" token_ids = tokenizer.encode(text, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( token_ids.to(model.device), max_new_tokens=256, pad_token_id=tokenizer.pad_token_id, eos_token_id=tokenizer.eos_token_id, ) output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True) print(output) ``` ### ELYZA-japanese-Llama-2-13b Models | Model Name | Vocab Size | #Params | |:---------------------------------------------|:----------:|:-------:| |[elyza/ELYZA-japanese-Llama-2-13b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b)| 32000 | 13.02B | |[elyza/ELYZA-japanese-Llama-2-13b-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-instruct)| 32000 | 13.02B | |[elyza/ELYZA-japanese-Llama-2-13b-fast](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast)| 44581 | 13.14B | |[elyza/ELYZA-japanese-Llama-2-13b-fast-instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b-fast-instruct)| 44581 | 13.14B | ### Developers - [Akira Sasaki](https://huggingface.co/akirasasaki) - [Masato Hirakawa](https://huggingface.co/m-hirakawa) - [Shintaro Horie](https://huggingface.co/e-mon) - [Tomoaki Nakamura](https://huggingface.co/tyoyo) - [Sam Passaglia](https://huggingface.co/passaglia) - [Daisuke Oba](https://huggingface.co/daisuk30ba) (intern) ### Licence Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. ### How to Cite ```tex @misc{elyzallama2023, title={ELYZA-japanese-Llama-2-13b}, url={https://huggingface.co/elyza/ELYZA-japanese-Llama-2-13b}, author={Akira Sasaki and Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Sam Passaglia and Daisuke Oba}, year={2023}, } ``` ### Citations ```tex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
digiplay/bluePencilRealistic_v05
digiplay
"2023-08-02T08:12:37Z"
1,410
6
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-06-19T00:09:45Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: ๐Ÿ’–โ˜บ๏ธLovely Cute Model๐Ÿ’ž https://huggingface.co/bluepen5805/blue_pencil_realistic https://civitai.com/models/88941?modelVersionId=97200 Original Author's DEMO images: ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b6d35ab9-38c0-4a8d-914f-15a66a5ab147/width=1536/01234-20230616194837-1416568985-20-7.5.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/cb29ffcf-1082-4242-b1a4-0573b6a3e1c8/width=1536/00220-20230617155341-380313430-25-7.5.jpeg) Sample image I made : ![5a649728-39f2-4423-9993-a370bb745cef.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/iVwoi1QVZX_jObzUSeIZB.jpeg)
shuttie/Mistral-7B-DadJokes-GGUF
shuttie
"2023-10-13T10:31:16Z"
1,410
10
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
"2023-10-10T09:42:22Z"
--- license: apache-2.0 --- # Mistral-7B-DadJokes-GGUF This is a [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) model fine-tuned on [Reddit-Dad-Jokes dataset](https://huggingface.co/datasets/shuttie/dadjokes) in a GGUF format. It can generate cringe and toxic jokes on any short prompt: ``` [INST] What horse says when she sees an old lady? [/INST] Move over grandma, I'm going to trot! [INST] What is a lawyer's favorite drink? [/INST] Whine. [INST] My obese pet parrot died over the weekend. [/INST] Now I just have a big fat birdcage. [INST] My wife asked why there was a blanket on top of me and my laptop. [/INST] I told her that I was just covering all my bases. [INST] My girlfriend changed after she became a vegetarian. [/INST] She finally lost her meat. ``` ## Used data We use a [Kaggle Reddit Dad Jokes dataset](https://huggingface.co/datasets/shuttie/dadjokes) formatted in a base+punchline tuples. The model task was to predict the punchline given the base. Prompt format is the same as for original Mistral-7B-0.1 model: `[INST] base [/INST] punchline` ## Fine-tuning process The model was fine-tuned with QLORA using the [LLM_QLORA](https://github.com/georgesung/llm_qlora/) trainer script with the following configuration: ```yaml base_model: mistralai/Mistral-7B-v0.1 model_family: llama # if unspecified will use AutoModelForCausalLM/AutoTokenizer model_context_window: 256 # if unspecified will use tokenizer.model_max_length data: type: dadjoke train: "dadjokes/dataset/train.csv" eval: "dadjokes/dataset/test.csv" lora: r: 8 lora_alpha: 32 target_modules: # modules for which to train lora adapters - q_proj - k_proj - v_proj lora_dropout: 0.05 bias: none task_type: CAUSAL_LM trainer: batch_size: 8 gradient_accumulation_steps: 1 warmup_steps: 100 num_train_epochs: 1 learning_rate: 0.0002 # 2e-4 logging_steps: 20 trainer_output_dir: trainer_outputs/ model_output_dir: models/ ``` Fine-tuning took ~70 minutes on a single RTX 4090. ## Running the model locally This model can be run with a [llama-cpp](https://github.com/ggerganov/llama.cpp) on a CPU using the following command: ``` ./main -n 64 -m models/ggml-model-q4_0.gguf -p "[INST] My girlfriend changed after she became a vegetarian. [/INST]" system_info: n_threads = 8 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | sampling: repeat_last_n = 64, repeat_penalty = 1.100000, presence_penalty = 0.000000, frequency_penalty = 0.000000, top_k = 40, tfs_z = 1.000000, top_p = 0.950000, typical_p = 1.000000, temp = 0.800000, mirostat = 0, mirostat_lr = 0.100000, mirostat_ent = 5.000000 generate: n_ctx = 512, n_batch = 512, n_predict = 64, n_keep = 0 [INST] My girlfriend changed after she became a vegetarian. [/INST] She finally lost her meat [end of text] llama_print_timings: load time = 439.38 ms llama_print_timings: sample time = 4.62 ms / 6 runs ( 0.77 ms per token, 1298.98 tokens per second) llama_print_timings: prompt eval time = 1786.76 ms / 18 tokens ( 99.26 ms per token, 10.07 tokens per second) llama_print_timings: eval time = 833.66 ms / 5 runs ( 166.73 ms per token, 6.00 tokens per second) llama_print_timings: total time = 2627.55 ms Log end ``` ## License Apache 2.0
RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf
RichardErkhov
"2024-06-25T08:43:41Z"
1,410
0
null
[ "gguf", "region:us" ]
null
"2024-06-25T08:18:34Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) tau-0.5B-instruct-DPOP - GGUF - Model creator: https://huggingface.co/M4-ai/ - Original model: https://huggingface.co/M4-ai/tau-0.5B-instruct-DPOP/ | Name | Quant method | Size | | ---- | ---- | ---- | | [tau-0.5B-instruct-DPOP.Q2_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q2_K.gguf) | Q2_K | 0.23GB | | [tau-0.5B-instruct-DPOP.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.IQ3_XS.gguf) | IQ3_XS | 0.24GB | | [tau-0.5B-instruct-DPOP.IQ3_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.IQ3_S.gguf) | IQ3_S | 0.25GB | | [tau-0.5B-instruct-DPOP.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q3_K_S.gguf) | Q3_K_S | 0.25GB | | [tau-0.5B-instruct-DPOP.IQ3_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.IQ3_M.gguf) | IQ3_M | 0.26GB | | [tau-0.5B-instruct-DPOP.Q3_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q3_K.gguf) | Q3_K | 0.26GB | | [tau-0.5B-instruct-DPOP.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q3_K_M.gguf) | Q3_K_M | 0.26GB | | [tau-0.5B-instruct-DPOP.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q3_K_L.gguf) | Q3_K_L | 0.28GB | | [tau-0.5B-instruct-DPOP.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.IQ4_XS.gguf) | IQ4_XS | 0.28GB | | [tau-0.5B-instruct-DPOP.Q4_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q4_0.gguf) | Q4_0 | 0.29GB | | [tau-0.5B-instruct-DPOP.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.IQ4_NL.gguf) | IQ4_NL | 0.29GB | | [tau-0.5B-instruct-DPOP.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q4_K_S.gguf) | Q4_K_S | 0.29GB | | [tau-0.5B-instruct-DPOP.Q4_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q4_K.gguf) | Q4_K | 0.3GB | | [tau-0.5B-instruct-DPOP.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q4_K_M.gguf) | Q4_K_M | 0.3GB | | [tau-0.5B-instruct-DPOP.Q4_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q4_1.gguf) | Q4_1 | 0.3GB | | [tau-0.5B-instruct-DPOP.Q5_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q5_0.gguf) | Q5_0 | 0.32GB | | [tau-0.5B-instruct-DPOP.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q5_K_S.gguf) | Q5_K_S | 0.32GB | | [tau-0.5B-instruct-DPOP.Q5_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q5_K.gguf) | Q5_K | 0.33GB | | [tau-0.5B-instruct-DPOP.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q5_K_M.gguf) | Q5_K_M | 0.33GB | | [tau-0.5B-instruct-DPOP.Q5_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q5_1.gguf) | Q5_1 | 0.34GB | | [tau-0.5B-instruct-DPOP.Q6_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q6_K.gguf) | Q6_K | 0.36GB | | [tau-0.5B-instruct-DPOP.Q8_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-instruct-DPOP-gguf/blob/main/tau-0.5B-instruct-DPOP.Q8_0.gguf) | Q8_0 | 0.47GB | Original model description: --- license: other language: - en --- # tau-instruct-0.5B-DPOP ## Model Details - **Model Name:** tau-instruct-0.5B-DPOP - **Base Model:** tau-0.5B - **Model Size:** 0.5B parameters - **Model Type:** Instruction-following Language Model - **Training Data**: About 700 high quality preference entries annotated by GPT-4. - **Training Procedure**: The DPO-Positive algorithm introduced by abacusai was used to train this model. ## Model Use tau-instruct-0.5B-DPOP is an instruction-following language model designed to follow user instructions and provide assistance across a wide range of tasks, including but not limited to: - Question answering - Text generation and completion - Mathematical problem solving - Code understanding, generation, and explanation - Reasoning and analysis - Trivia and general knowledge The model's ability to follow instructions, combined with its knowledge in various domains, makes it suitable for applications such as virtual assistants, educational tools, and research aids. ## Performance and Limitations Preliminary evaluations indicate that tau-instruct-0.5B-DPOP exhibits improved performance in following instructions compared to its base model, tau-0.5B. However, the model may still have limitations and biases inherited from its base model and the fine-tuning dataset. Users should be aware that the model's performance may vary depending on the complexity and clarity of the provided instructions. It is essential to evaluate the model's outputs critically and provide feedback to support ongoing improvements. ## Environmental Impact The fine-tuning process for tau-instruct-0.5B-DPOP required additional computational resources, contributing to the model's overall environmental impact. Efforts were made to optimize the fine-tuning process and minimize the carbon footprint. ## Ethical Considerations tau-instruct-0.5B-DPOP has the potential to be used in a wide range of applications, some of which may have ethical implications. Users should ensure that the model is used responsibly and does not cause harm or discriminate against individuals or groups. As with any AI system, it is crucial to consider the potential biases and limitations of the model when deploying it in real-world applications. ## Usage Rights Make sure to read Qwen's license before using this model. The fine-tuned model, tau-instruct-0.5B-DPOP, is subject to the same usage rights as its base model, tau-0.5B. ## Evaluation Coming soon.
Q-bert/Bumblebee-7B
Q-bert
"2023-12-03T17:27:48Z"
1,409
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "Math", "conversational", "en", "dataset:meta-math/MetaMathQA", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-03T17:19:36Z"
--- license: apache-2.0 datasets: - meta-math/MetaMathQA language: - en pipeline_tag: text-generation tags: - Math --- ## Bumblebee-7B <img src="https://images6.alphacoders.com/131/1314913.jpeg" width="300" height="200" alt="Bumblebee-7B"> Fine-tuned On [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA) You can use ChatML format. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [Coming soon]() | Metric | Value | |-----------------------|---------------------------| | Avg. | Coming soon | | ARC (25-shot) | Coming soon | | HellaSwag (10-shot) | Coming soon | | MMLU (5-shot) | Coming soon | | TruthfulQA (0-shot) | Coming soon | | Winogrande (5-shot) | Coming soon | | GSM8K (5-shot) | Coming soon |
anakin87/Llama-3-8b-ita-ties-pro
anakin87
"2024-05-24T21:16:34Z"
1,409
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "it", "arxiv:2306.01708", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-18T14:48:08Z"
--- base_model: - meta-llama/Meta-Llama-3-8B-Instruct - DeepMount00/Llama-3-8b-Ita - swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA library_name: transformers tags: - mergekit - merge license: llama3 language: - it --- # Llama-3-8b-ita-ties-pro This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). I tried to merge two of the best Italian LLMs using Mergekit. The results are acceptable, but I could not improve on the best existing model. ## Evaluation For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard). Here's a breakdown of the performance metrics: | Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average | |:----------------------------|:----------------------|:----------------|:---------------------|:--------| | **Accuracy Normalized** | 0.6967 | 0.5646 | 0.5717 | 0.6110 | ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as a base. ### Models Merged The following models were included in the merge: * [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) * [swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA](https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: meta-llama/Meta-Llama-3-8B-Instruct # no parameters necessary for base model - model: swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA parameters: density: 0.7 weight: 0.6 - model: DeepMount00/Llama-3-8b-Ita parameters: density: 0.7 weight: 0.3 merge_method: ties base_model: meta-llama/Meta-Llama-3-8B-Instruct parameters: normalize: true dtype: bfloat16 ```
bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF
bartowski
"2024-06-18T18:40:18Z"
1,409
4
null
[ "gguf", "text-generation", "license:other", "region:us" ]
text-generation
"2024-06-17T19:01:04Z"
--- license: other license_name: deepseek-license license_link: LICENSE quantized_by: bartowski pipeline_tag: text-generation --- ## Llamacpp imatrix Quantizations of DeepSeek-Coder-V2-Lite-Base Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3166">b3166</a> for quantization. Original model: https://huggingface.co/deepseek-ai/DeepSeek-Coder-V2-Lite-Base All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) ## Prompt format ``` <๏ฝœbeginโ–ofโ–sentence๏ฝœ>{system_prompt} User: {prompt} Assistant: <๏ฝœendโ–ofโ–sentence๏ฝœ>Assistant: ``` ## Download a file (not the whole branch) from below: | Filename | Quant type | File Size | Description | | -------- | ---------- | --------- | ----------- | | [DeepSeek-Coder-V2-Lite-Base-Q8_0_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q8_0_L.gguf) | Q8_0_L | 17.09GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Extremely high quality, generally unneeded but max available quant. | | [DeepSeek-Coder-V2-Lite-Base-Q8_0.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q8_0.gguf) | Q8_0 | 16.70GB | Extremely high quality, generally unneeded but max available quant. | | [DeepSeek-Coder-V2-Lite-Base-Q6_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q6_K_L.gguf) | Q6_K_L | 14.56GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Very high quality, near perfect, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q6_K.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q6_K.gguf) | Q6_K | 14.06GB | Very high quality, near perfect, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q5_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q5_K_L.gguf) | Q5_K_L | 12.37GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. High quality, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q5_K_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q5_K_M.gguf) | Q5_K_M | 11.85GB | High quality, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q5_K_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q5_K_S.gguf) | Q5_K_S | 11.14GB | High quality, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q4_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q4_K_L.gguf) | Q4_K_L | 10.91GB | *Experimental*, uses f16 for embed and output weights. Please provide any feedback of differences. Good quality, uses about 4.83 bits per weight, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q4_K_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q4_K_M.gguf) | Q4_K_M | 10.36GB | Good quality, uses about 4.83 bits per weight, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q4_K_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q4_K_S.gguf) | Q4_K_S | 9.53GB | Slightly lower quality with more space savings, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-IQ4_XS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-IQ4_XS.gguf) | IQ4_XS | 8.57GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. | | [DeepSeek-Coder-V2-Lite-Base-Q3_K_L.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q3_K_L.gguf) | Q3_K_L | 8.45GB | Lower quality but usable, good for low RAM availability. | | [DeepSeek-Coder-V2-Lite-Base-Q3_K_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q3_K_M.gguf) | Q3_K_M | 8.12GB | Even lower quality. | | [DeepSeek-Coder-V2-Lite-Base-IQ3_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-IQ3_M.gguf) | IQ3_M | 7.55GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. | | [DeepSeek-Coder-V2-Lite-Base-Q3_K_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q3_K_S.gguf) | Q3_K_S | 7.48GB | Low quality, not recommended. | | [DeepSeek-Coder-V2-Lite-Base-IQ3_XS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-IQ3_XS.gguf) | IQ3_XS | 7.12GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | [DeepSeek-Coder-V2-Lite-Base-IQ3_XXS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-IQ3_XXS.gguf) | IQ3_XXS | 6.96GB | Lower quality, new method with decent performance, comparable to Q3 quants. | | [DeepSeek-Coder-V2-Lite-Base-Q2_K.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-Q2_K.gguf) | Q2_K | 6.43GB | Very low quality but surprisingly usable. | | [DeepSeek-Coder-V2-Lite-Base-IQ2_M.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-IQ2_M.gguf) | IQ2_M | 6.32GB | Very low quality, uses SOTA techniques to also be surprisingly usable. | | [DeepSeek-Coder-V2-Lite-Base-IQ2_S.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-IQ2_S.gguf) | IQ2_S | 6.00GB | Very low quality, uses SOTA techniques to be usable. | | [DeepSeek-Coder-V2-Lite-Base-IQ2_XS.gguf](https://huggingface.co/bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF/blob/main/DeepSeek-Coder-V2-Lite-Base-IQ2_XS.gguf) | IQ2_XS | 5.96GB | Very low quality, uses SOTA techniques to be usable. | ## Downloading using huggingface-cli First, make sure you have hugginface-cli installed: ``` pip install -U "huggingface_hub[cli]" ``` Then, you can target the specific file you want: ``` huggingface-cli download bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF --include "DeepSeek-Coder-V2-Lite-Base-Q4_K_M.gguf" --local-dir ./ ``` If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run: ``` huggingface-cli download bartowski/DeepSeek-Coder-V2-Lite-Base-GGUF --include "DeepSeek-Coder-V2-Lite-Base-Q8_0.gguf/*" --local-dir DeepSeek-Coder-V2-Lite-Base-Q8_0 ``` You can either specify a new local-dir (DeepSeek-Coder-V2-Lite-Base-Q8_0) or download them all in place (./) ## Which file should I choose? A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have. If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM. If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB Smaller than that total. Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'. If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M. If you want to get more into the weeds, you can check out this extremely useful feature chart: [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix) But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size. These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide. The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm. Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
AI-Sweden-Models/gpt-sw3-20b-instruct
AI-Sweden-Models
"2024-06-12T16:09:57Z"
1,408
12
transformers
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "conversational", "da", "sv", "en", "no", "is", "dataset:laion/OIG", "dataset:databricks/databricks-dolly-15k", "dataset:OpenAssistant/oasst1", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-04-28T13:29:50Z"
--- license: other language: - da - sv - en - 'no' - is pipeline_tag: text-generation datasets: - laion/OIG - databricks/databricks-dolly-15k - OpenAssistant/oasst1 --- # Model description [AI Sweden](https://huggingface.co/AI-Sweden-Models/) **Base models** [GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/) [GPT-Sw3 6.7B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b/) | [GPT-Sw3 6.7B v2](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2/) | [GPT-Sw3 20B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b/) [GPT-Sw3 40B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-40b/) **Instruct models** [GPT-Sw3 126M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m-instruct/) | [GPT-Sw3 356M Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m-instruct/) | [GPT-Sw3 1.3B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b-instruct/) [GPT-Sw3 6.7B v2 Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct/) | [GPT-Sw3 20B Instruct](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/) **Quantized models** [GPT-Sw3 6.7B v2 Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-6.7b-v2-instruct-4bit-gptq) | [GPT-Sw3 20B Instruct 4-bit gptq](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct-4bit-gptq) GPT-SW3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-SW3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. The `instruct` models were finetrained on instruction data using both chat and raw text formats. # Intended use GPT-SW3 is an autoregressive large language model that is capable of generating coherent text in 5 different languages, and 4 programming languages. GPT-SW3 can also be instructed to perform text tasks that it has not been explicitly trained for, by casting them as text generation tasks. # Limitations Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of for example bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: overrepresent some viewpoints and underrepresent others, contain stereotypes, generate hateful, abusive, violent, discriminatory or prejudicial language. The model may make errors, including producing incorrect information as if it were factual, it may generate irrelevant or repetitive outputs, and content that may not be appropriate for all settings, including sexual content. # How to use To be able to access the model from Python, since this is a private repository, you have to log in with your access token. This can be done with `huggingface-cli login`, see [HuggingFace Quick Start Guide](https://huggingface.co/docs/huggingface_hub/quick-start#login) for more information. The following code snippet loads our tokenizer & model, and uses the GPU if available. ```python import torch from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM # Initialize Variables model_name = "AI-Sweden-Models/gpt-sw3-20b-instruct" device = "cuda:0" if torch.cuda.is_available() else "cpu" prompt = "Trรคd รคr fina fรถr att" # Initialize Tokenizer & Model tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) model.eval() model.to(device) ``` Generating text using the `generate` method is done as follows: ```python input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] generated_text = tokenizer.decode(generated_token_ids) ``` The chat format used during data-preprocessing takes the form: ``` <|endoftext|><s> User: Jag tycker trรคd รคr fina <s> Bot: Kul att du tycker det! <s> ... ``` The procedure to generate text is the same as before: ```python prompt = """ <|endoftext|><s> User: Varfรถr รคr trรคd fina? <s> Bot: """.strip() input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] generated_text = tokenizer.decode(generated_token_ids) ``` Generating text using the `generate` method is done as follows: ```python input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(device) generated_token_ids = model.generate( inputs=input_ids, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1, )[0] A convenient alternative to the `generate` method is the HuggingFace pipeline, which handles most of the work for you: ```python generator = pipeline('text-generation', tokenizer=tokenizer, model=model, device=device) generated = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.6, top_p=1)[0]["generated_text"] ``` # Compliance The release of GPT-SW3 consists of model weights, a configuration file, a tokenizer file and a vocabulary file. None of these files contain any personally identifiable information (PII) or any copyrighted material. # GPT-SW3 Model Card Following Mitchell et al. (2018), we provide a model card for GPT-SW3. # Model Details - Person or organization developing model: GPT-SW3 was developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. - Model date: GPT-SW3 date of release 2022-12-20 - Model version: This is the second generation of GPT-SW3. - Model type: GPT-SW3 is a large decoder-only transformer language model. - Information about training algorithms, parameters, fairness constraints or other applied approaches, and features: GPT-SW3 was trained with the NeMo Megatron GPT implementation. - Paper or other resource for more information: N/A. - License: [LICENSE](https://huggingface.co/AI-Sweden-Models/gpt-sw3-20b-instruct/blob/main/LICENSE). - Where to send questions or comments about the model: [email protected] # Intended Use - Primary intended uses: We pre-release GPT-SW3 for research and evaluation of the capabilities of Large Language Models for the Nordic languages. This is an important step in the process of knowledge building for LLMs, validating the model and collecting feedback on both what works well and what does not. - Primary intended users: Organizations and individuals in the Nordic NLP ecosystem who can contribute to the validation and testing of the models and provide feedback to the community. - Out-of-scope use cases: See the modified RAIL license. # Data, Limitations, and Recommendations - Data selection for training: Training data for GPT-SW3 was selected based on a combination of breadth and availability. See our Datasheet for more detailed information on the data used to train our model. - Data selection for evaluation: N/A - Limitations: Like other large language models for which the diversity (or lack thereof) of training data induces downstream impact on the quality of our model, GPT-SW3 has limitations in terms of bias and safety. GPT-SW3 can also have quality issues in terms of generation diversity and hallucination. In general, GPT-SW3 is not immune from the plethora of issues that plague modern large language models. By releasing with the modified RAIL license, we also hope to increase communication, transparency, and the study of large language models. The model may: Overrepresent some viewpoints and underrepresent others. Contain stereotypes. Generate: Hateful, abusive, or violent language. Discriminatory or prejudicial language. Content that may not be appropriate for all settings, including sexual content. Make errors, including producing incorrect information as if it were factual. Generate irrelevant or repetitive outputs. - Recommendations for future work: Indirect users should be made aware when the content they're working with is created by the LLM. Users should be aware of Risks and Limitations, and include an appropriate age disclaimer or blocking interface as necessary. Models pretrained with the LLM should include an updated Model Card. Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. - We hope that the release of GPT-SW3, as well as information around our model training process, will increase open science around both large language models in specific and natural language processing and deep learning in general. # GPT-SW3 Datasheet - We follow the recommendations of Gebru et al. (2021) and provide a datasheet for the dataset used to train GPT-SW3. # Motivation - For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description. Pre-training of Large Language Models (LLM), such as GPT-3 (T. B. Brown et al., 2020), Gopher (J. W. Rae et al., 2022), BLOOM (T. L. Scao et al., 2022), etc. require 100s or even 1000s GBs of text data, with recent studies (Chinchilla: J. Hoffmann et al., 2022) suggesting that the scale of the training data is even more important than previously imagined. Therefore, in order to train Swedish LLMs, we needed a large scale Swedish dataset of high quality. Since no such datasets existed before this initiative, we collected data in the Nordic and English languages. - Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? The Strategic Initiative Natural Language Understanding at AI Sweden has established a new research environment in which collaboration is key. The core team working on the creation of the dataset is the NLU research group at AI Sweden. This group consists of researchers and developers from AI Sweden (Lindholmen Science Park AB) and RISE. - Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number. The Swedish Innovation Agency (Vinnova) has funded this work across several different grants, including 2019-02996 and 2022-00949. - Any other comments? No. # Composition - What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description. The instances are textual documents categorized by language and document type. The dataset is a filtered and deduplicated collection that includes the following sources: - Books - Litteraturbanken (https://litteraturbanken.se/) - The Pile - Articles - Diva (https://www.diva-portal.org/) - The Pile: PubMed - The Pile: ArXiv - Code - Code Parrot: Github code (https://huggingface.co/datasets/codeparrot/github-code) - Conversational - Familjeliv (https://www.familjeliv.se/) - Flashback (https://flashback.se/) - Datasets collected through Parlai (see Appendix in data paper for complete list) (https://github.com/facebookresearch/ParlAI) - Pushshift.io Reddit dataset, developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - Math - English Math dataset generated with code from DeepMind (D. Saxton et al., 2019) - Swedish Math dataset, generated as above with manually translated templates - Miscellaneous - Summarization data (https://www.ida.liu.se/~arnjo82/papers/clarin-21-julius.pdf) - OPUS, the open parallel corpus (https://opus.nlpl.eu/) - Movie scripts (https://github.com/Aveek-Saha/Movie-Script-Database) - Natural Instructions (https://github.com/allenai/natural-instructions) - P3 (Public Pool of Prompts), (https://huggingface.co/datasets/bigscience/P3) - The Norwegian Colossal Corpus (https://huggingface.co/datasets/NbAiLab/NCC) - Danish Gigaword (https://gigaword.dk/) - Icelandic Gigaword (https://clarin.is/en/resources/gigaword/) - The Pile: Stack Exchange - Web Common Crawl - Web data from the project LES (Linguistic Explorations of Societies, https://les.gu.se). - Multilingual C4 (MC4), prepared by AllenAI from C4 (C. Raffel et al., 2019) - Open Super-large Crawled Aggregated coRpus (OSCAR) (P. O. Suarez, 2019) - The Pile: Open Web Text - Web Sources - Various public Swedish website scrapes (see Appendix in data paper) - Familjeliv Articles - Public Swedish Job Ads from JobTech/Arbetsfรถrmedlingen - Wikipedia - Official Wikipedia dumps - **Instruction data**: - [dolly](https://github.com/databrickslabs/dolly/tree/master/data) - [Open Assistant](https://github.com/LAION-AI/Open-Assistant/blob/main/docs/docs/data/datasets.md) - [OIG](https://laion.ai/blog/oig-dataset/) - Fass: Swedish pharmaceutical information, which was transformed into Q&A format. - How many instances are there in total (of each type, if appropriate)? The training data consists of 1.1TB UTF-8 encoded text, containing 660M documents with a total of 320B tokens. - Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable). The subset of our dataset that comes from multilingual Common Crawl datasets (MC4, Oscar), are filtered by language to only include Swedish, Norwegian, Danish, and Icelandic. From The Pile, we included only the parts that typically are of highest textual quality or complemented the rest of our dataset with sources we otherwise lacked (e.g. books). The remainder of the dataset was collected from the above sources. - What data does each instance consist of? โ€œRawโ€ data (e.g., unprocessed text or images) or features? In either case, please provide a description. Each instance consists of raw text data. - Is there a label or target associated with each instance? If so, please provide a description. No. - Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text. No. - Are relationships between individual instances made explicit (e.g., usersโ€™ movie ratings, social network links)? If so, please describe how these relationships are made explicit. There are no explicit relationships between individual instances. - Are there recommended data splits (e.g., training, development/validation, testing)? If so, please provide a description of these splits, explaining the rationale behind them. There are no explicit splits recommended for this dataset. When pre-training the model, a random split for train, dev, test is set to 99.99%, 0.08%, 0.02% respectively, and is sampled proportionally to each subsetโ€™s weight and size. The weight of each subset was manually decided beforehand. These decisions were made considering the dataโ€™s value, source, and language, to form a representative and balanced pre-training corpus. - Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. The dataset is a collection of many sources, some of which naturally contain some overlap. Although we have performed deduplication, some overlap may still remain. Furthermore, there may be some noise remaining from artifacts originating in Common Crawl datasets, that have been missed by our data filtering process. Except for these, we are not aware of any errors, sources of noise, or redundancies. - Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained. - Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. The dataset contains subsets of public Common Crawl, Reddit, Familjeliv and Flashback. These could contain sentences that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety. - Does the dataset relate to people? If not, you may skip the remaining questions in this section. Some documents of this data relate to people, such as news articles, Wikipedia descriptions, etc. - Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. No, the dataset does not explicitly include subpopulation identification. - Any other comments? No. # Collection Process - How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how. N/A. The dataset is a union of publicly available datasets and sources. - What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)? How were these mechanisms or procedures validated? The data was downloaded from the internet. - If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? Please see previous answers for how parts of the dataset were selected. - Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? This data is mined, filtered and sampled by machines. - Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The dataset was collected during the period June 2021 to June 2022. The creation of the collected sources varies, with e.g. Common Crawl data that have been continuously collected over 12 years. - Does the dataset relate to people? If not, you may skip the remainder of the questions in this section. Yes. The texts have been produced by people. Any personal information potentially present in publicly available data sources and thus in the created dataset is of no interest to the collection and use of the dataset. - Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation. Yes. - Any other comments? No. - Preprocessing/cleaning/labeling - Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remainder of the questions in this section. The dataset was filtered and re-formatted on a document-level using standard procedures, inspired by the work in The BigScience ROOTS Corpus (H. Laurenรงon et al., 2022) and Gopher (J. W. Rae et al., 2022). This was done with the goal of achieving a consistent text format throughout the dataset, and to remove documents that did not meet our textual quality requirements (e.g. repetitiveness). Furthermore, the dataset was deduplicated to remedy the overlap between collected subsets using the MinHash algorithm, similar to the method used in GPT-3 and The Pile, and described in greater detail in โ€œDeduplicating Training Data Makes Language Models Betterโ€ (K. Lee et al., 2021). **Instruction data**: The processing outlined above was not applied to the instruction data. Instruction data was turned into chat-turn format and formatted accordingly with an end-of-turn token, as well as unrolled into raw textual form. The Open Assistant data was also automatically translated using GPT-SW3 into Swedish, Danish, Norwegian, and Icelandic. - Was the โ€œrawโ€ data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the โ€œrawโ€ data. The โ€œrawโ€ component datasets are publicly available in their respective locations. - Any other comments? No. # Uses - Has the dataset been used for any tasks already? If so, please provide a description. The dataset was used to pre-train the GPT-SW3 models. - Is there a repository that links to any or all papers or systems that use the dataset? If so, please provide a link or other access point. N/A. - What (other) tasks could the dataset be used for? The data can be used to pre-train language models, which are foundations for many current and future language tasks. - Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a future user might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please provide a description. Is there anything a future user could do to mitigate these undesirable harms? The dataset is probably quite representative of Swedish internet discourse in general, and of the Swedish public sector, but we know that this data does not necessarily reflect the entire Swedish population. - Are there tasks for which the dataset should not be used? If so, please provide a description. None that we are currently aware of. - Any other comments? No. # Distribution - Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. No. - How will the dataset distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? N/A. - When will the dataset be distributed? N/A. - Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions. N/A. - Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. N/A. - Any other comments? No. # Maintenance - Who is supporting/hosting/maintaining the dataset? AI Sweden at Lindholmen Science Park AB. - How can the owner/curator/manager of the dataset be contacted (e.g., email address)? [email protected] - Is there an erratum? If so, please provide a link or other access point. N/A. - Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)? Currently, there are no plans for updating the dataset. - If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. Read the privacy policy for the NLU initiative at AI Sweden [here](https://www.ai.se/en/privacy-policy-nlu). - Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to users. N/A. - If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/ verified? If so, please describe how. If not, why not? Is there a process for communicating/ distributing these contributions to other users? If so, please provide a description. Not at this time. - Any other comments? No. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_AI-Sweden-Models__gpt-sw3-20b-instruct) | Metric | Value | |-----------------------|---------------------------| | Avg. | 38.19 | | ARC (25-shot) | 43.17 | | HellaSwag (10-shot) | 71.09 | | MMLU (5-shot) | 31.32 | | TruthfulQA (0-shot) | 41.02 | | Winogrande (5-shot) | 66.77 | | GSM8K (5-shot) | 8.79 | | DROP (3-shot) | 5.15 |
digiplay/CamelliaMix_NSFW_diffusers_v1.1
digiplay
"2024-05-17T12:46:04Z"
1,408
18
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-05-31T09:19:25Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/44315/camelliamixnsfw This model name is *CamelliaMix_NSFW*, but I think it can generate many elegant look, my sample images: (You can apply VAE or not,different feel) ![ๅคฉไฝฟๆ€่€ƒ - 2023-06-01T014553.253.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/QDTU53CjbK01UlwLKKwWt.png) ![ไธ‹่ฝฝ - 2023-06-01T014142.465.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/zN8MTWSmJHvpxBdBk5doI.png) ![ไธ‹่ฝฝ - 2023-06-01T014036.021.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/5GT10rhJaCJZcAOVIGoFm.png) No VAE sample: ![ไธ‹่ผ‰ - 2023-06-01T025802.542.png](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/OMUrw0RSrnqHBgGfQYqwV.png)
TheBloke/leo-hessianai-13B-chat-bilingual-GGUF
TheBloke
"2023-09-28T11:11:33Z"
1,408
6
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "de", "dataset:LeoLM/OpenSchnabeltier", "dataset:OpenAssistant/OASST-DE", "dataset:FreedomIntelligence/alpaca-gpt4-deutsch", "dataset:FreedomIntelligence/evol-instruct-deutsch", "dataset:LeoLM/German_Poems", "dataset:LeoLM/German_Songs", "dataset:garage-bAInd/Open-Platypus", "dataset:WizardLM/WizardLM_evol_instruct_70k", "dataset:bjoernp/oasst25-08-23-filtered", "base_model:LeoLM/leo-hessianai-13b-chat-bilingual", "license:llama2", "text-generation-inference", "region:us" ]
text-generation
"2023-09-28T10:56:39Z"
--- base_model: LeoLM/leo-hessianai-13b-chat-bilingual datasets: - LeoLM/OpenSchnabeltier - OpenAssistant/OASST-DE - FreedomIntelligence/alpaca-gpt4-deutsch - FreedomIntelligence/evol-instruct-deutsch - LeoLM/German_Poems - LeoLM/German_Songs - garage-bAInd/Open-Platypus - WizardLM/WizardLM_evol_instruct_70k - bjoernp/oasst25-08-23-filtered inference: false language: - en - de library_name: transformers license: llama2 model_creator: LAION LeoLM model_name: Leo Hessianai 13B Chat Bilingual model_type: llama pipeline_tag: text-generation prompt_template: '<|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Leo Hessianai 13B Chat Bilingual - GGUF - Model creator: [LAION LeoLM](https://huggingface.co/LeoLM) - Original model: [Leo Hessianai 13B Chat Bilingual](https://huggingface.co/LeoLM/leo-hessianai-13b-chat-bilingual) <!-- description start --> ## Description This repo contains GGUF format model files for [LAION LeoLM's Leo Hessianai 13B Chat Bilingual](https://huggingface.co/LeoLM/leo-hessianai-13b-chat-bilingual). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF) * [LAION LeoLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LeoLM/leo-hessianai-13b-chat-bilingual) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [leo-hessianai-13b-chat-bilingual.Q2_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [leo-hessianai-13b-chat-bilingual.Q3_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [leo-hessianai-13b-chat-bilingual.Q3_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [leo-hessianai-13b-chat-bilingual.Q3_K_L.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [leo-hessianai-13b-chat-bilingual.Q4_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [leo-hessianai-13b-chat-bilingual.Q4_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q4_K_S.gguf) | Q4_K_S | 4 | 7.42 GB| 9.92 GB | small, greater quality loss | | [leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [leo-hessianai-13b-chat-bilingual.Q5_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [leo-hessianai-13b-chat-bilingual.Q5_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [leo-hessianai-13b-chat-bilingual.Q5_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [leo-hessianai-13b-chat-bilingual.Q6_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [leo-hessianai-13b-chat-bilingual.Q8_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-chat-bilingual-GGUF/blob/main/leo-hessianai-13b-chat-bilingual.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/leo-hessianai-13B-chat-bilingual-GGUF and below it, a specific filename to download, such as: leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/leo-hessianai-13B-chat-bilingual-GGUF leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/leo-hessianai-13B-chat-bilingual-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/leo-hessianai-13B-chat-bilingual-GGUF leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/leo-hessianai-13B-chat-bilingual-GGUF", model_file="leo-hessianai-13b-chat-bilingual.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค€๊ต ๊น€, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้˜ฟๆ˜Ž, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: LAION LeoLM's Leo Hessianai 13B Chat Bilingual # LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2. Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality specific text. Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length, [`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! ๐Ÿ‘€). With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption. Read our [blog post]() or our paper (preprint coming soon) for more details! *A project by Bjรถrn Plรผster and Christoph Schuhmann in collaboration with LAION and HessianAI.* ## LeoLM Chat `LeoLM/leo-hessianai-13b-chat-bilingual` is a bilingual English-German chat model built on our foundation model `LeoLM/leo-hessianai-13b` and finetuned on a selection of German translateed instruction datasets and their English counterparts. The model performs exceptionally well on writing, explanation and discussion tasks but struggles somewhat with math and advanced reasoning. See our MT-Bench scores: ``` { "first_turn": 6.13125, "second_turn": 4.88125, "categories": { "writing": 6.75, "roleplay": 5.55, "reasoning": 3.3, "math": 2.25, "coding": 3.9, "extraction": 5.8, "stem": 7.55, "humanities": 8.95 }, "average": 5.50625 } ``` ## Model Details - **Finetuned from:** [LeoLM/leo-hessianai-13b](https://huggingface.co/LeoLM/leo-hessianai-13b) - **Model type:** Causal decoder-only transformer language model - **Language:** English and German - **Demo:** [Web Demo]() - **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) - **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Bjรถrn Plรผster](mailto:[email protected]) ## Use in ๐Ÿค—Transformers First install direct dependencies: ``` pip install transformers torch sentencepiece ``` If you want faster inference using flash-attention2, you need to install these dependencies: ```bash pip install packaging ninja pip install flash-attn==v2.1.1 --no-build-isolation pip install git+https://github.com/HazyResearch/[email protected]#subdirectory=csrc/rotary ``` Then load the model in transformers: ```python from transformers import pipeline import torch system_prompt = """<|im_start|>system Dies ist eine Unterhaltung zwischen einem intelligenten, hilfsbereitem KI-Assistenten und einem Nutzer. Der Assistent gibt ausfรผhrliche, hilfreiche und ehrliche Antworten.<|im_end|> """ prompt_format = "<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n" prompt = "Erklรคre mir wie die Fahrradwegesituation in Hamburg ist." generator = pipeline(model="LeoLM/leo-hessianai-13b-chat-bilingual", device="cuda", torch_dtype=torch.float16, trust_remote_code=True) # True for flash-attn2 else False print(generator(prompt_format.format(prompt=prompt), do_sample=True, top_p=0.95, max_length=8192)) ``` "*Als KI kann ich keine persรถnlichen Beobachtungen teilen, aber ich kann einige allgemeine Informationen zur Fahrradwegesituation in Hamburg liefern. Im Vergleich zu vielen anderen groรŸen Stรคdten hat Hamburg eine hohe Anzahl von Fahrradfahrern und nimmt seine Verantwortung fรผr nachhaltige Verkehrsmittel sehr ernst. Es gibt viele Fahrradwege und separate Fahrspuren, die Radfahrern ermรถglichen, zusammen mit dem Autoverkehr zu fahren. Diese Fahrradspuren sind oft mit Markierungen gekennzeichnet und durch physische Trennungen von anderen Fahrspuren abgegrenzt. Darรผber hinaus gibt es viele FahrradstraรŸen, auf denen Radfahrer Vorfahrt haben und Autos langsamer fahren mรผssen.* *In einigen stรคdtischen Gebieten kรถnnen Fahrradwege jedoch eng oder รผberfรผllt sein, besonders wรคhrend der StoรŸzeiten. Es gibt auch viele Kreuzungen, an denen Radfahrer anhalten und auf Grรผn warten mรผssen, รคhnlich wie Autofahrer. Insgesamt ist die Fahrradinfrastruktur in Hamburg ziemlich gut, aber wie รผberall gibt es immer Raum fรผr Verbesserungen.*" ## Prompting / Prompt Template Prompt dialogue template (ChatML format): ``` """ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant """ ``` The model input can contain multiple conversation turns between user and assistant, e.g. ``` <|im_start|>user {prompt 1}<|im_end|> <|im_start|>assistant {reply 1}<|im_end|> <|im_start|>user {prompt 2}<|im_end|> <|im_start|>assistant (...) ``` ## Ethical Considerations and Limitations LeoLM has been tested in English and German, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, the potential outputs of `LeoLM/leo-hessianai-7b-chat` cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of `LeoLM/leo-hessianai-7b-chat`, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/). ## Finetuning Details | Hyperparameter | Value | |---|---| | Num epochs | 3 | | Examples per epoch | 233275 | | Global batch size | 256 | | Learning rate | 3e-5 | | Warmup steps | 100 | | LR scheduler | Cosine | | Adam betas | (0.9, 0.95) | | Weight decay | 0.001 | ## Dataset Details ``` ## Stats for 'Subset of LeoLM/OpenSchnabeltier' (21314 samples (100.0%)) ----------------- Accepted: 21314/21314 (100.0%) Accepted tokens: 8134690 Skipped: 0 (0.0%) Min tokens per sample: 25 Max tokens per sample: 1202 Avg tokens per sample: 381.65947264708643 ----------------- ## Stats for 'Subset of garage-bAInd/Open-Platypus' (24427 samples (100.0%)) ----------------- Accepted: 24427/24427 (100.0%) Accepted tokens: 9549043 Skipped: 0 (0.0%) Min tokens per sample: 23 Max tokens per sample: 5054 Avg tokens per sample: 390.9216440823679 ----------------- ## Stats for 'Subset of WizardLM/WizardLM_evol_instruct_70k' (68600 samples (100.0%)) ----------------- Accepted: 68600/68600 (100.0%) Accepted tokens: 33045040 Skipped: 0 (0.0%) Min tokens per sample: 18 Max tokens per sample: 11810 Avg tokens per sample: 481.7061224489796 ----------------- ## Stats for 'Subset of FreedomIntelligence/evol-instruct-deutsch' (57841 samples (100.0%)) ----------------- Accepted: 57841/57841 (100.0%) Accepted tokens: 42958192 Skipped: 0 (0.0%) Min tokens per sample: 33 Max tokens per sample: 5507 Avg tokens per sample: 742.6944900675991 ----------------- ## Stats for 'Subset of FreedomIntelligence/alpaca-gpt4-deutsch' (48969 samples (100.0%)) ----------------- Accepted: 48969/48969 (100.0%) Accepted tokens: 13372005 Skipped: 0 (0.0%) Min tokens per sample: 19 Max tokens per sample: 1359 Avg tokens per sample: 273.07082031489307 ----------------- ## Stats for 'Subset of LeoLM/German_Songs' (490 samples (100.0%)) ----------------- Accepted: 490/490 (100.0%) Accepted tokens: 618642 Skipped: 0 (0.0%) Min tokens per sample: 747 Max tokens per sample: 1678 Avg tokens per sample: 1262.534693877551 ----------------- ## Stats for 'Subset of LeoLM/German_Poems' (392 samples (100.0%)) ----------------- Accepted: 392/392 (100.0%) Accepted tokens: 187897 Skipped: 0 (0.0%) Min tokens per sample: 231 Max tokens per sample: 826 Avg tokens per sample: 479.3290816326531 ----------------- ## Stats for 'Subset of OpenAssistant/OASST_DE' (3646 samples (100.0%)) ----------------- Accepted: 3646/3646 (100.0%) Accepted tokens: 2338738 Skipped: 0 (0.0%) Min tokens per sample: 29 Max tokens per sample: 2484 Avg tokens per sample: 641.4530992868897 ----------------- ## Stats for 'Subset of bjoernp/oasst25-08-23-filtered' (8922 samples (100.0%)) ----------------- Accepted: 8922/8922 (100.0%) Accepted tokens: 4526427 Skipped: 0 (0.0%) Min tokens per sample: 23 Max tokens per sample: 5407 Avg tokens per sample: 507.3332212508406 ----------------- ## Stats for 'total' (235632 samples (100.0%)) ----------------- Accepted: 235632/235632 (100.0%) Accepted tokens: 115862397 Skipped: 0 (0.0%) Min tokens per sample: 18 Max tokens per sample: 11810 Avg tokens per sample: 491.70909299246284 ----------------- ``` <!-- original-model-card end -->
senseable/33x-coder
senseable
"2024-01-08T01:20:38Z"
1,408
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "Code Generation", "en", "dataset:andersonbcdefg/synthetic_retrieval_tasks", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-04T22:53:25Z"
--- language: - "en" metrics: - code_eval library_name: transformers tags: - Code Generation datasets: - andersonbcdefg/synthetic_retrieval_tasks - ise-uiuc/Magicoder-Evol-Instruct-110K license: "apache-2.0" --- # 33x Coding Model 33x-coder is a powerful Llama based model available on Hugging Face, designed to assist and augment coding tasks. Leveraging the capabilities of advanced language models, 33x-coder specializes in understanding and generating code. This model is trained on a diverse range of programming languages and coding scenarios, making it a versatile tool for developers looking to streamline their coding process. Whether you're debugging, seeking coding advice, or generating entire scripts, 33x-coder can provide relevant, syntactically correct code snippets and comprehensive programming guidance. Its intuitive understanding of coding languages and constructs makes it an invaluable asset for any coding project, helping to reduce development time and improve code quality. ## Importing necessary libraries from transformers ``` from transformers import AutoTokenizer, AutoModelForCausalLM ``` ## Initialize the tokenizer and model ``` tokenizer = AutoTokenizer.from_pretrained("senseable/33x-coder") model = AutoModelForCausalLM.from_pretrained("senseable/33x-coder").cuda() ``` # User's request for a quick sort algorithm in Python ``` messages = [ {'role': 'user', 'content': "Write a Python function to check if a number is prime."} ] ``` ## Preparing the input for the model by encoding the messages and sending them to the same device as the model ``` inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) ``` ## Generating responses from the model with specific parameters for text generation ``` outputs = model.generate( inputs, max_new_tokens=512, # Maximum number of new tokens to generate do_sample=False, # Disable random sampling to get the most likely next token top_k=50, # The number of highest probability vocabulary tokens to keep for top-k-filtering top_p=0.95, # Nucleus sampling: keeps the top p probability mass worth of tokens num_return_sequences=1, # The number of independently computed returned sequences for each element in the batch eos_token_id=32021, # End of sequence token id add_generation_prompt=True ) ``` ## Decoding and printing the generated response ``` start_index = len(inputs[0]) generated_output_tokens = outputs[0][start_index:] decoded_output = tokenizer.decode(generated_output_tokens, skip_special_tokens=True) print("Generated Code:\n", decoded_output) ``` --- license: apache-2.0 ---
johnpaulbin/beanboxtranslator
johnpaulbin
"2024-06-21T18:32:02Z"
1,408
0
null
[ "gguf", "region:us" ]
null
"2024-06-21T18:29:55Z"
Entry not found
grantprice/Cerebras-GPT-590M-finetuned-DND
grantprice
"2023-06-10T07:07:38Z"
1,407
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-08T20:51:20Z"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: Cerebras-GPT-590M-finetuned-DND results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Cerebras-GPT-590M-finetuned-DND This model is a fine-tuned version of [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0434 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 467 | 1.9262 | | 2.7059 | 2.0 | 934 | 1.3295 | | 1.7414 | 3.0 | 1401 | 1.0434 | ### Framework versions - Transformers 4.31.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
timm/tiny_vit_21m_384.dist_in22k_ft_in1k
timm
"2023-09-01T18:12:57Z"
1,407
2
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2207.10666", "license:apache-2.0", "region:us" ]
image-classification
"2023-09-01T16:05:33Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k - imagenet-22k --- # Model card for tiny_vit_21m_384.dist_in22k_ft_in1k A TinyViT image classification model. Pretrained on ImageNet-22k with distillation and fine-tuned on ImageNet-1k by paper authors. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 21.2 - GMACs: 11.9 - Activations (M): 46.8 - Image size: 384 x 384 - **Papers:** - TinyViT: Fast Pretraining Distillation for Small Vision Transformers: https://arxiv.org/abs/2207.10666 - **Original:** https://github.com/microsoft/Cream/tree/main/TinyViT - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-22k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('tiny_vit_21m_384.dist_in22k_ft_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tiny_vit_21m_384.dist_in22k_ft_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 96, 96, 96]) # torch.Size([1, 192, 48, 48]) # torch.Size([1, 384, 24, 24]) # torch.Size([1, 576, 12, 12]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'tiny_vit_21m_384.dist_in22k_ft_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 576, 12, 12) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Citation ```bibtex @InProceedings{tiny_vit, title={TinyViT: Fast Pretraining Distillation for Small Vision Transformers}, author={Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and Xiao, Bin and Fu, Jianlong and Yuan, Lu}, booktitle={European conference on computer vision (ECCV)}, year={2022} } ```
hungsvdut2k2/raft-medical-vistral-qa
hungsvdut2k2
"2024-06-24T09:42:34Z"
1,407
0
transformers
[ "transformers", "safetensors", "gguf", "mistral", "text-generation-inference", "unsloth", "trl", "en", "base_model:Viet-Mistral/Vistral-7B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2024-06-24T03:50:35Z"
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - trl base_model: Viet-Mistral/Vistral-7B-Chat --- # Uploaded model - **Developed by:** hungsvdut2k2 - **License:** apache-2.0 - **Finetuned from model :** Viet-Mistral/Vistral-7B-Chat This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
banelingz/bin4ry_stheno_8B_v1_gguf
banelingz
"2024-07-01T00:10:13Z"
1,407
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "base_model:Sao10K/L3-8B-Stheno-v3.3-32K", "base_model:Hastagaras/For-Bin4ry", "endpoints_compatible", "region:us" ]
null
"2024-06-29T00:03:52Z"
--- base_model: - Sao10K/L3-8B-Stheno-v3.3-32K - Hastagaras/For-Bin4ry library_name: transformers tags: - mergekit - merge --- # bin4ry_stheno_8B_v1_gguf These are quantizations of [banelingz/bin4ry_stheno_8B_v1](https://huggingface.co/banelingz/bin4ry_stheno_8B_v1).
dacorvo/tiny-random-MistralForCausalLM
dacorvo
"2024-01-15T16:45:16Z"
1,406
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-15T16:45:16Z"
Entry not found
MAISAAI/gemma-2b-coder
MAISAAI
"2024-05-08T00:22:31Z"
1,406
2
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "generated_from_trainer", "code", "coding", "dataset:HuggingFaceH4/CodeAlpaca_20K", "doi:10.57967/hf/2208", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-02-22T16:59:56Z"
--- tags: - generated_from_trainer - code - coding - gemma model-index: - name: gemma-2b-coder results: [] license_name: gemma-terms-of-use license_link: https://ai.google.dev/gemma/terms language: - code thumbnail: https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png datasets: - HuggingFaceH4/CodeAlpaca_20K pipeline_tag: text-generation --- <div style="text-align:center;width:250px;height:250px;"> <img src="https://huggingface.co/mrm8488/gemma-2b-coder/resolve/main/logo.png" alt="gemma coder logo""> </div> # Gemma Coder ๐Ÿ‘ฉโ€๐Ÿ’ป **Gemma 2B** fine-tuned on the **CodeAlpaca 20k instructions dataset** by using the method **QLoRA** with [PEFT](https://github.com/huggingface/peft) library. ## Model description ๐Ÿง  [Gemma-2b](https://huggingface.co/google/gemma-2b) Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone. ## Training and evaluation data ๐Ÿ“š [CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following data used for fine-tuning the Code Alpaca model. ### Training hyperparameters โš™ Training took 1h 40 min on Free Colab T4 GPU (16GB VRAM) with the following params: ```py num_train_epochs=2, per_device_train_batch_size=2, per_device_eval_batch_size=1, gradient_accumulation_steps=32 learning_rate=2.5e-5, optim="paged_adamw_8bit", logging_steps=5, seed=66, load_best_model_at_end=True, save_strategy="steps", save_steps=50, evaluation_strategy="steps", eval_steps=50, save_total_limit=2, remove_unused_columns=True, fp16=True, bf16=False ``` ### Training results ๐Ÿ—’๏ธ | Step | Training Loss | Validation Loss | |------|---------------|-----------------| | 50 | 1.467800 | 1.450770 | | 100 | 1.060000 | 1.064840 | | 150 | 0.900200 | 0.922290 | | 200 | 0.848400 | 0.879911 | | 250 | 0.838100 | 0.867354 | ### Eval results ๐Ÿ“Š WIP ### Example of usage ๐Ÿ‘ฉโ€๐Ÿ’ป I recommend install the following version of `torch`: ```sh pip install "torch>=2.1.1" -U ``` ```py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig model_id = "MAISAAI/gemma-2b-coder" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda") def generate( instruction, max_new_tokens=256, temperature=0.1, top_p=0.75, top_k=40, num_beams=2, **kwargs, ): system = f"<bos><|system|>\nYou are a helpful coding assistant.<eos>\n" prompt = f"{system}<|user|>\n{instruction}<eos>\n<|assistant|>\n" inputs = tokenizer(prompt, return_tensors="pt") input_ids = inputs["input_ids"].to("cuda") attention_mask = inputs["attention_mask"].to("cuda") generation_config = GenerationConfig( temperature=temperature, top_p=top_p, top_k=top_k, num_beams=num_beams, **kwargs, ) with torch.no_grad(): generation_output = model.generate( input_ids=input_ids, attention_mask=attention_mask, generation_config=generation_config, return_dict_in_generate=True, max_new_tokens=max_new_tokens, early_stopping=True ) s = generation_output.sequences[0] output = tokenizer.decode(s, skip_special_tokens=True) return output.split("<|assistant|>")[1] instruction = """ Edit the following XML code to add a navigation bar to the top of a web page <html> <head> <title>Maisa</title> </head> """ print(generate(instruction)) ``` ### Citation ```bibtex @misc {maisa_ai_2024, author = { {MAISA AI} }, title = { gemma-2b-coder (Revision e5e4e5b) }, year = 2024, url = { https://huggingface.co/MAISAAI/gemma-2b-coder }, doi = { 10.57967/hf/2208 }, publisher = { Hugging Face } } ```
darkstorm2150/Protogen_x5.8_Official_Release
darkstorm2150
"2023-03-21T18:20:14Z"
1,405
202
diffusers
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "art", "artistic", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-01-06T01:18:34Z"
--- language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - art - artistic - diffusers inference: true license: creativeml-openrail-m --- <center><img src="https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release/resolve/main/Protogen%20x5.8-512.png" style="height:690px; border-radius: 8%; border: 10px solid #663380; padding-top:0px;" span title="Protogen x5.8 Raw Output"></center> <center><h1>Protogen x5.8 (Scifi-Anime) Official Release</h1></center> <center><p><em>Research Model by <a href="https://instagram.com/officialvictorespinoza">darkstorm2150</a></em></p></center> </div> ## Table of contents * [General info](#general-info) * [Granular Adaptive Learning](#granular-adaptive-learning) * [Trigger Words](#trigger-words) * [Setup](#setup) * [Space](#space) * [CompVis](#compvis) * [Diffusers](#๐Ÿงจ-diffusers) * [Checkpoint Merging Data Reference](#checkpoint-merging-data-reference) * [License](#license) ## General info Protogen x5.8 Protogen was warm-started with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) and is rebuilt using dreamlikePhotoRealV2.ckpt as a core, adding small amounts during merge checkpoints. ## Granular Adaptive Learning Granular adaptive learning is a machine learning technique that focuses on adjusting the learning process at a fine-grained level, rather than making global adjustments to the model. This approach allows the model to adapt to specific patterns or features in the data, rather than making assumptions based on general trends. Granular adaptive learning can be achieved through techniques such as active learning, which allows the model to select the data it wants to learn from, or through the use of reinforcement learning, where the model receives feedback on its performance and adapts based on that feedback. It can also be achieved through techniques such as online learning where the model adjust itself as it receives more data. Granular adaptive learning is often used in situations where the data is highly diverse or non-stationary and where the model needs to adapt quickly to changing patterns. This is often the case in dynamic environments such as robotics, financial markets, and natural language processing. ## Trigger Words modelshoot style, analog style, mdjrny-v4 style, nousr robot Trigger words are available for the hassan1.4 and f222, might have to google them :) ## Setup To run this model, download the model.ckpt or model.safetensor and install it in your "stable-diffusion-webui\models\Stable-diffusion" directory ## Space We support a [Gradio](https://github.com/gradio-app/gradio) Web UI: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/darkstorm2150/Stable-Diffusion-Protogen-webui) ## CompVis ## CKPT [Download ProtoGen x5.8.ckpt (7.7GB)](https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release/resolve/main/ProtoGen_X5.8.ckpt) [Download ProtoGen X5.8-pruned-fp16.ckpt (1.72 GB)](https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release/resolve/main/ProtoGen_X5.8-pruned-fp16.ckpt) ## Safetensors [Download ProtoGen x5.8.safetensors (7.7GB)](https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release/resolve/main/ProtoGen_X5.8.safetensors) [Download ProtoGen x5.8-pruned-fp16.safetensors (1.72GB)](https://huggingface.co/darkstorm2150/Protogen_x5.8_Official_Release/resolve/main/ProtoGen_X5.8-pruned-fp16.safetensors) ### ๐Ÿงจ Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). ```python from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler import torch prompt = ( "modelshoot style, (extremely detailed CG unity 8k wallpaper), full shot body photo of the most beautiful artwork in the world, " "english medieval witch, black silk vale, pale skin, black silk robe, black cat, necromancy magic, medieval era, " "photorealistic painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by Jeremy Mann, Greg Manchess, Antonio Moro, trending on ArtStation, " "trending on CGSociety, Intricate, High Detail, Sharp focus, dramatic, photorealistic painting art by midjourney and greg rutkowski" ) model_id = "darkstorm2150/Protogen_v5.8_Official_Release" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) pipe = pipe.to("cuda") image = pipe(prompt, num_inference_steps=25).images[0] image.save("./result.jpg") ``` ## - PENDING DATA FOR MERGE, RPGv2 not accounted.. ## Checkpoint Merging Data Reference <style> .myTable { border-collapse:collapse; } .myTable th { background-color:#663380; color:white; } .myTable td, .myTable th { padding:5px; border:1px solid #663380; } </style> <table class="myTable"> <tr> <th>Models</th> <th>Protogen v2.2 (Anime)</th> <th>Protogen x3.4 (Photo)</th> <th>Protogen x5.3 (Photo)</th> <th>Protogen x5.8 (Sci-fi/Anime)</th> <th>Protogen x5.9 (Dragon)</th> <th>Protogen x7.4 (Eclipse)</th> <th>Protogen x8.0 (Nova)</th> <th>Protogen x8.6 (Infinity)</th> </tr> <tr> <td>seek_art_mega v1</td> <td>52.50%</td> <td>42.76%</td> <td>42.63%</td> <td></td> <td></td> <td></td> <td>25.21%</td> <td>14.83%</td> </tr> <tr> <td>modelshoot v1</td> <td>30.00%</td> <td>24.44%</td> <td>24.37%</td> <td>2.56%</td> <td>2.05%</td> <td>3.48%</td> <td>22.91%</td> <td>13.48%</td> </tr> <tr> <td>elldreth v1</td> <td>12.64%</td> <td>10.30%</td> <td>10.23%</td> <td></td> <td></td> <td></td> <td>6.06%</td> <td>3.57%</td> </tr> <tr> <td>photoreal v2</td> <td></td> <td></td> <td>10.00%</td> <td>48.64%</td> <td>38.91%</td> <td>66.33%</td> <td>20.49%</td> <td>12.06%</td> </tr> <tr> <td>analogdiffusion v1</td> <td></td> <td>4.75%</td> <td>4.50%</td> <td></td> <td></td> <td></td> <td>1.75%</td> <td>1.03%</td> </tr> <tr> <td>openjourney v2</td> <td></td> <td>4.51%</td> <td>4.28%</td> <td></td> <td></td> <td>4.75%</td> <td>2.26%</td> <td>1.33%</td> </tr> <tr> <td>hassan1.4</td> <td>2.63%</td> <td>2.14%</td> <td>2.13%</td> <td></td> <td></td> <td></td> <td>1.26%</td> <td>0.74%</td> </tr> <tr> <td>f222</td> <td>2.23%</td> <td>1.82%</td> <td>1.81%</td> <td></td> <td></td> <td></td> <td>1.07%</td> <td>0.63%</td> </tr> <tr> <td>hasdx</td> <td></td> <td></td> <td></td> <td>20.00%</td> <td>16.00%</td> <td>4.07%</td> <td>5.01%</td> <td>2.95%</td> </tr> <tr> <td>moistmix</td> <td></td> <td></td> <td></td> <td>16.00%</td> <td>12.80%</td> <td>3.86%</td> <td>4.08%</td> <td>2.40%</td> </tr> <tr> <td>roboDiffusion v1</td> <td></td> <td>4.29%</td> <td></td> <td>12.80%</td> <td>10.24%</td> <td>3.67%</td> <td>4.41%</td> <td>2.60%</td> </tr> <tr> <td>RPG v3</td> <td></td> <td>5.00%</td> <td></td> <td></td> <td>20.00%</td> <td>4.29%</td> <td>4.29%</td> <td>2.52%</td> </tr> <tr> <td>anything&everything</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.51%</td> <td>0.56%</td> <td>0.33%</td> </tr> <tr> <td>dreamlikediff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>5.0%</td> <td>0.63%</td> <td>0.37%</td> </tr> <tr> <td>sci-fidiff v1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.10%</td> </tr> <tr> <td>synthwavepunk v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>3.26%</td> </tr> <tr> <td>mashupv2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>11.51%</td> </tr> <tr> <td>dreamshaper 252</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.04%</td> </tr> <tr> <td>comicdiff v2</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>4.25%</td> </tr> <tr> <td>artEros</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>15.00%</td> </tr> </table> ## License License This model is licesed under a modified CreativeML OpenRAIL-M license. You are not allowed to host, finetune, or do inference with the model or its derivatives on websites/apps/etc. If you want to, please email us at [email protected] You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Photoreal 2.0) and include the license as well as a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0) You are free to use the outputs (images) of the model for commercial purposes in teams of 10 or less You can't use the model to deliberately produce nor share illegal or harmful outputs or content The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/LICENSE.md
TheBloke/robin-13B-v2-fp16
TheBloke
"2023-09-03T08:38:16Z"
1,405
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-16T18:59:47Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # OptimalScale's Robin 13B v2 fp16 These files are pytorch format fp16 model files for [OptimalScale's Robin 13B v2](https://huggingface.co/OptimalScale/robin-13b-v2-delta). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/robin-13B-v2-GGML) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/robin-13B-v2-fp16) ## Prompt template ``` A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions ###Human: prompt ###Assistant: ``` <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: vamX, K, Jonathan Leane, Lone Striker, Sean Connelly, Chris McCloskey, WelcomeToTheClub, Nikolai Manek, John Detwiler, Kalila, David Flickinger, Fen Risland, subjectnull, Johann-Peter Hartmann, Talal Aujan, John Villwock, senxiiz, Khalefa Al-Ahmad, Kevin Schuppel, Alps Aficionado, Derek Yates, Mano Prime, Nathan LeClaire, biorpg, trip7s trip, Asp the Wyvern, chris gileta, Iucharbius , Artur Olbinski, Ai Maven, Joseph William Delisle, Luke Pendergrass, Illia Dulskyi, Eugene Pentland, Ajan Kanaga, Willem Michiel, Space Cruiser, Pyrater, Preetika Verma, Junyu Yang, Oscar Rangel, Spiking Neurons AB, Pierre Kircher, webtim, Cory Kujawski, terasurfer , Trenton Dambrowitz, Gabriel Puliatti, Imad Khwaja, Luke. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: OptimalScale's Robin 13B v2 No model card provided in source repository.
Rocketknight1/falcon-rw-1b
Rocketknight1
"2023-07-07T16:32:07Z"
1,405
0
transformers
[ "transformers", "pytorch", "falcon", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-28T18:46:26Z"
Entry not found
Salesforce/codet5p-220m-bimodal
Salesforce
"2023-07-19T13:48:31Z"
1,405
3
transformers
[ "transformers", "pytorch", "codet5p_bimodal", "feature-extraction", "custom_code", "arxiv:2305.07922", "license:bsd-3-clause", "region:us" ]
feature-extraction
"2023-07-19T13:39:52Z"
--- license: bsd-3-clause --- # CodeT5+ 220M Bimodal Models ## Model description [CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks. It is introduced in the paper: [CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf) by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution). Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data. Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture. Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca). ## How to use This model can be easily loaded using the `AutoModel` functionality and employs the [CodeT5](https://github.com/salesforce/CodeT5) tokenizer with three special tokens added (`[ENC]`, `[TDEC]`, `[CDEC]`). This checkpoint consists of a CodeT5+ 220M model and a projection layer and an itm_head layer for text-code matching. ```python from transformers import AutoModel, AutoTokenizer checkpoint = "Salesforce/codet5p-220m-bimodal" device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True) model = AutoModel.from_pretrained(checkpoint, trust_remote_code=True).to(device) ``` ## Pretraining data This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code). The data is preprocessed by reserving only permissively licensed code ("mit" โ€œapache-2โ€, โ€œbsd-3-clauseโ€, โ€œbsd-2-clauseโ€, โ€œcc0-1.0โ€, โ€œunlicenseโ€, โ€œiscโ€). Supported languages (9 in total) are as follows: `c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby.` ## Training procedure This checkpoint is first trained on the unimodal code data at the first-stage pretraining and then on bimodal text-code pair data using the proposed mixture of pretraining tasks. Please refer to the paper for more details. ## Evaluation results Please refer to the paper and the official GitHub repo for more details. ## BibTeX entry and citation info ```bibtex @article{wang2023codet5plus, title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation}, author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.}, journal={arXiv preprint}, year={2023} } ```
choco9966/Llama-2-7b-instruct-tuning
choco9966
"2023-09-12T04:43:09Z"
1,405
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-12T01:37:52Z"
Entry not found
mncai/llama2-13b-dpo-v3
mncai
"2023-12-14T03:35:30Z"
1,405
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "ko", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-03T09:31:48Z"
--- license: cc-by-nc-sa-4.0 language: - en - ko --- # Model Card for llama2-dpo-v3 ### Introduction of MindsAndCompany https://mnc.ai/ We develop a diverse range of AI models and craft solutions tailored for business applications. In the realm of generative AI, our product development includes the Code Assistant, the TOD Chatbot, and LLMOps. We are also actively working on the development of Enterprise AGI (Artificial General Intelligence). ### Model Summary based beomi/llama-2-koen-13b, instruction tuned and dpo. ### How to Use Here give some examples of how to use our model. ```python from transformers import AutoConfig, AutoModel, AutoTokenizer import transformers import torch hf_model = 'mncai/llama2-13b-dpo-v3' message = "<|user|>\n๋‘ ๊ฐœ์˜ ๊ตฌ๊ฐ€ ์žˆ๋Š”๋ฐ ๊ฐ๊ฐ ์ง€๋ฆ„์ด 1, 2์ผ๋•Œ ๊ตฌ์˜ ๋ถ€ํ”ผ๋Š” ๋ช‡๋ฐฐ ์ฐจ์ด๊ฐ€ ๋‚˜์ง€? ์„ค๋ช…๋„ ๊ฐ™์ด ํ•ด์ค˜.\n<|assistant|>\n" sequences = pipeline( message, do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=2048, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ### LICENSE Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License, under LLAMA 2 COMMUNITY LICENSE AGREEMENT ### Contact If you have any questions, please raise an issue or contact us at [email protected]
Yntec/DaintyMix
Yntec
"2024-01-17T13:20:43Z"
1,405
1
diffusers
[ "diffusers", "safetensors", "Anime", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2024-01-17T11:25:01Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image --- # Dainty Mix This model with the MoistMix V2 VAE baked in. Samples and prompts: ![Dainty Mix free ai image generator samples](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/Mt7H31tOOYJiMnGcdWE-I.png) (Click for larger) Top left: High Quality, Masterpiece, pretty cute Girl, Shirt, Pants , detailed chibi Eyes, blomalley, Small Sclera, pink hair, blue hoodie, tomboy, Pink Boots, thigh, fingerless gloves, Spats Top right: girl mini cute style, sitting in gaming room, A wholesome animation key shot at computer monitor, cinematic lighting, studio ghibli, pixar and disney animation, anime key art by ROSSDRAWS and Clay Mann, style of maple story, shade, soft chibi, girl Bottom left: masterpiece, best quality, ultra detailed, 8k, 4k, intricate,highly detailed, detailed background,detailed landscape, ((portrait)), foodstyle,simple background, food, black background, plate, realistic, burger, meat, food focus, cheese, still life, lettuce Bottom right: Pretty CUTE girl, detailed CHIBI EYES, gorgeous detailed hair, Magazine ad, chess, iconic, 1949, sharp focus. acrylic art โ€‹on canvas by ROSSDRAWS and Clay Mann and Hayao Miyazaki and artgerm
ChrisWilson011016/5EcQw2VSUV3PA3e84UAQgU2X9dzKX7zVzzDK4U89UvHvGitQ_vgg
ChrisWilson011016
"2024-03-04T18:50:44Z"
1,405
0
keras
[ "keras", "region:us" ]
null
"2024-02-24T15:10:16Z"
Entry not found
Vikhrmodels/Vikhr-7B-instruct_0.4
Vikhrmodels
"2024-05-24T10:10:50Z"
1,405
26
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ru", "en", "arxiv:2405.13929", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-10T11:54:34Z"
--- library_name: transformers language: - ru - en --- # ะ ะตะปะธะท ะฒะธั…ั€ั 0.3-0.4 ะ”ะพะปะธะปะธ ัะธะปัŒะฝะพ ะฑะพะปัŒัˆะต ะดะฐะฝะฝั‹ั… ะฒ sft, ั‚ะตะฟะตั€ัŒ ัั‚ะฐะฑะธะปัŒะฝะตะต ั€ะฐะฑะพั‚ะฐะตั‚ json ะธ multiturn, ัะปะตะณะบะฐ ะฟะพะดั‚ะพั‡ะธะปะธ ะฟะฐั€ะฐะผะตั‚ั€ั‹ ะฟั€ะตั‚ั€ะตะฝะฐ ะผะพะดะตะปะธ Added a lot more data to sft, now json and multiturn work more stable on long context and hard prompts - [Google Colab](https://colab.research.google.com/drive/15O9LwZhVUa1LWhZa2UKr_B-KOKenJBvv#scrollTo=5EeNFU2-9ERi) - [GGUF](https://huggingface.co/Vikhrmodels/Vikhr-7B-instruct_0.4-GGUF) ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model = AutoModelForCausalLM.from_pretrained("Vikhrmodels/Vikhr-7B-instruct_0.4", device_map="auto", attn_implementation="flash_attention_2", torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained("Vikhrmodels/Vikhr-7B-instruct_0.4") from transformers import AutoTokenizer, pipeline pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) prompts = [ "ะ’ ั‡ะตะผ ั€ะฐะทะฝะธั†ะฐ ะผะตะถะดัƒ ั„ั€ัƒะบั‚ะพะผ ะธ ะพะฒะพั‰ะตะผ?", "ะ“ะพะดั‹ ะถะธะทะฝะธ ะบะพะปะผะฐะณะพั€ะพะฒะฐ?"] def test_inference(prompt): prompt = pipe.tokenizer.apply_chat_template([{"role": "user", "content": prompt}], tokenize=False, add_generation_prompt=True) print(prompt) outputs = pipe(prompt, max_new_tokens=512, do_sample=True, num_beams=1, temperature=0.25, top_k=50, top_p=0.98, eos_token_id=79097) return outputs[0]['generated_text'][len(prompt):].strip() for prompt in prompts: print(f" prompt:\n{prompt}") print(f" response:\n{test_inference(prompt)}") print("-"*50) ``` ``` @article{nikolich2024vikhr, title={Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for Russian}, author={Aleksandr Nikolich and Konstantin Korolev and Artem Shelmanov}, journal={arXiv preprint arXiv:2405.13929}, year={2024}, url={https://arxiv.org/pdf/2405.13929} } ```
unsloth/Qwen2-7B
unsloth
"2024-06-06T17:18:38Z"
1,405
1
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-06T16:46:35Z"
--- language: - en license: apache-2.0 library_name: transformers tags: - unsloth - transformers - qwen2 --- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth! We have a Google Colab Tesla T4 notebook for Qwen2 7b here: https://colab.research.google.com/drive/1mvwsIQWDs2EdZxZQF9pRGnnOvE86MVvR?usp=sharing And a Colab notebook for [Qwen2 0.5b](https://colab.research.google.com/drive/1-7tjDdMAyeCueyLAwv6vYeBMHpoePocN?usp=sharing) and another for [Qwen2 1.5b](https://colab.research.google.com/drive/1W0j3rP8WpgxRdUgkb5l6E00EEVyjEZGk?usp=sharing) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/u54VK8m8tk) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## โœจ Finetune for Free All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face. | Unsloth supports | Free Notebooks | Performance | Memory use | |-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------| | **Llama-3 8b** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less | | **Gemma 7b** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less | | **Mistral 7b** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less | | **Llama-2 7b** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less | | **TinyLlama** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less | | **CodeLlama 34b** A100 | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less | | **Mistral 7b** 1xT4 | [โ–ถ๏ธ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less | | **DPO - Zephyr** | [โ–ถ๏ธ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less | - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates. - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr. - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
r1char9/ruT5_q_a
r1char9
"2024-06-14T07:21:46Z"
1,405
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "T5", "russian", "question-answering", "text-generation", "ru", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
question-answering
"2024-06-12T06:07:00Z"
--- license: mit language: - ru pipeline_tag: question-answering tags: - T5 - russian - question-answering - text-generation --- ะœะพะดะตะปัŒ [ruT5-base](https://huggingface.co/ai-forever/ruT5-base) ะฑั‹ะปะฐ fine-tuned ะดะปั ะทะฐะดะฐั‡ะธ __question answer__, ะฟั€ะตะดะฝะฐะทะฝะฐั‡ะตะฝะฝะฐั ะดะปั __Russian__ ั‚ะตะบัั‚. ## Uses ``` from transformers import AutoTokenizer, T5ForConditionalGeneration qa_checkpoint = 'r1char9/ruT5_q_a' qa_model = T5ForConditionalGeneration.from_pretrained(qa_checkpoint) qa_tokenizer = AutoTokenizer.from_pretrained(qa_checkpoint) prompt='ะะฐั€ะธััƒะน ะธะทะพะฑั€ะฐะถะตะฝะธะต ะขะพะผะฐัะฐ ะจะตะปะฑะธ' def question_answering(prompt): question = "ะงั‚ะพ ะฝัƒะถะฝะพ ะฝะฐั€ะธัะพะฒะฐั‚ัŒ?" tokenized_sentence = qa_tokenizer(prompt, question, return_tensors='pt') res = qa_model.generate(**tokenized_sentence) decoded_res = qa_tokenizer.decode(res[0], skip_special_tokens=True) return decoded_res prompt = question_answering(prompt) # 'ะธะทะพะฑั€ะฐะถะตะฝะธะต ะขะพะผะฐัะฐ ะจะตะปะฑะธ' ```
TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16
TheBloke
"2023-07-02T20:34:58Z"
1,404
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "arxiv:2302.13971", "arxiv:2306.05685", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-30T18:20:55Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # LmSys' Vicuna 33B 1.3 (final) fp16 This is fp16 pytorch format model files for [LmSys' Vicuna 33B 1.3 (final)](https://huggingface.co/lmsys/vicuna-33b-v1.3) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-33b-v1.3) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: LmSys' Vicuna 33B 1.3 (final) # Vicuna Model Card ## Model Details Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. - **Developed by:** [LMSYS](https://lmsys.org/) - **Model type:** An auto-regressive language model based on the transformer architecture. - **License:** Non-commercial license - **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971). ### Model Sources - **Repository:** https://github.com/lm-sys/FastChat - **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/ - **Paper:** https://arxiv.org/abs/2306.05685 - **Demo:** https://chat.lmsys.org/ ## Uses The primary use of Vicuna is research on large language models and chatbots. The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## How to Get Started with the Model Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights. APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api. ## Training Details Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning. The training data is around 140K conversations collected from ShareGPT.com. See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Evaluation Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf). ## Difference between different versions of Vicuna See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
KappaNeuro/needlepoint
KappaNeuro
"2023-09-14T10:00:07Z"
1,404
3
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "embroidery", "style", "needlepoint", "needlework", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:other", "region:us" ]
text-to-image
"2023-09-14T10:00:03Z"
--- license: other tags: - text-to-image - stable-diffusion - lora - diffusers - embroidery - style - needlepoint - needlework base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: Needlepoint page widget: - text: "Needlepoint - 8k, very simple overgrown greek landscape folly sewn patch" - text: "Needlepoint - a coarse cross-stitch of a big ol' red truck driving from Vegas to Nashville" - text: "Needlepoint - A mixed media art piece featuring an optical illusion pattern in the shape of a triangle made entirely out of particle board, overlaid with delicate embroidery in a creamy chenille thread. The colors of the embroidery should be a combination of frozen civilization, tropical escape and wild cranberry with hints of scorpion and Penelope. The embroidery should feel alive and three-dimensional, making the rigid surface of the particle board take on a sinewy, swirling appearance." - text: "Needlepoint - a beautiful fabric embroidered minimalist Homer Simpson and Ronald McDonald made by a contemporary artist, textile art, contemporary artist embroidery, mockup, Simpsons, large embroidery art of Homer Simpsons from the tv show the Simpsons, McDonald tone, award winning photo, bright natural light, ultra-detailed, intricate details, 16K, volumetric lighting, best quality" - text: "Needlepoint - Introduce flowing and organic forms with crewelwork stitches, inspired by nature and folklore. Use a variety of colorful threads to create texture and dimension.Dive into the vibrant and eclectic world of Stella Jean and create a whimsical embroidery design that merges various embroidery techniques with her unique style. Combining elements of Canvaswork, Blackwork, Crewelwork, Goldwork, Silk Shading, Whitework, and Raised Work, you will craft a piece that celebrates the beauty of cultural diversity and the fusion of different artistic traditions. Canvaswork: Incorporate geometric patterns inspired by traditional textiles from different cultures, using vibrant thread colors to create a bold and eye-catching design" - text: "Needlepoint - non-binary self portrait in embroidery, embroidery hoop, middle age, pensive cautious optimism, portrait reflecting their best self, soft pallette, i will not drown in the liqueur of safety" - text: "Needlepoint - yarn wool intricate embroidery texture 3D beautifully stitched fabric embroidered neutral rainbows and wildflowers on hills and fairy houses quilted background cottagecore photorealistic" - text: "Needlepoint - A painting of Rolling hills with a country road and an active volcano in the background made of crochet polyester. Framed in a modern frame. Studio shot on a light gray backdrop." - text: "Needlepoint - father's day stitching, close-up view father and daughter playing and laughing together, eye-level view, soft colors, minimalistic background, in the style of Cross-stitching" - text: "Needlepoint - a boho design of flowers and mushrooms that looks like it was printed on an 3d embroidered cross stitch background" --- # Needlepoint ([CivitAI](https://civitai.com/models/154077) ![Image 0](2329721.jpeg) > Needlepoint - 8k, very simple overgrown greek landscape folly sewn patch <p>Needlepoint is a form of embroidery that involves stitching decorative designs onto canvas using a needle and thread. It is often done on a stiff canvas mesh that provides a grid for the stitches.</p><p>In needlepoint, the design is typically pre-printed or hand-drawn onto the canvas, serving as a guide for the stitching process. The stitcher then uses various types of stitches, such as tent stitch, basketweave stitch, or cross-stitch, to fill in the design and create textured and colorful patterns.</p><p>Needlepoint allows for a wide range of design possibilities, from intricate and detailed images to simple geometric shapes or monograms. The choice of threads and colors adds depth and visual interest to the finished piece.</p><p>Needlepoint is a versatile craft that can be used to create various items, including decorative wall hangings, pillows, purses, and even clothing accessories like belts or shoes. It offers an opportunity for personal expression and creativity, allowing stitchers to customize designs and incorporate their own artistic flair.</p><p>The process of needlepoint can be relaxing and meditative, as the repetitive nature of stitching can provide a sense of focus and calm. It also offers a way to create lasting heirlooms or unique gifts for loved ones.</p><p>Needlepoint has a rich history, with examples of this craft dating back centuries. It continues to be practiced and enjoyed by people of all ages, combining artistic expression with a satisfying and enjoyable stitching process.</p> ## Image examples for the model: ![Image 1](2329670.jpeg) > Needlepoint - a coarse cross-stitch of a big ol' red truck driving from Vegas to Nashville ![Image 2](2329532.jpeg) > Needlepoint - A mixed media art piece featuring an optical illusion pattern in the shape of a triangle made entirely out of particle board, overlaid with delicate embroidery in a creamy chenille thread. The colors of the embroidery should be a combination of frozen civilization, tropical escape and wild cranberry with hints of scorpion and Penelope. The embroidery should feel alive and three-dimensional, making the rigid surface of the particle board take on a sinewy, swirling appearance. ![Image 3](2329566.jpeg) > Needlepoint - a beautiful fabric embroidered minimalist Homer Simpson and Ronald McDonald made by a contemporary artist, textile art, contemporary artist embroidery, mockup, Simpsons, large embroidery art of Homer Simpsons from the tv show the Simpsons, McDonald tone, award winning photo, bright natural light, ultra-detailed, intricate details, 16K, volumetric lighting, best quality ![Image 4](2329535.jpeg) > Needlepoint - Introduce flowing and organic forms with crewelwork stitches, inspired by nature and folklore. Use a variety of colorful threads to create texture and dimension.Dive into the vibrant and eclectic world of Stella Jean and create a whimsical embroidery design that merges various embroidery techniques with her unique style. Combining elements of Canvaswork, Blackwork, Crewelwork, Goldwork, Silk Shading, Whitework, and Raised Work, you will craft a piece that celebrates the beauty of cultural diversity and the fusion of different artistic traditions. Canvaswork: Incorporate geometric patterns inspired by traditional textiles from different cultures, using vibrant thread colors to create a bold and eye-catching design ![Image 5](2329538.jpeg) > Needlepoint - non-binary self portrait in embroidery, embroidery hoop, middle age, pensive cautious optimism, portrait reflecting their best self, soft pallette, i will not drown in the liqueur of safety ![Image 6](2329534.jpeg) > Needlepoint - yarn wool intricate embroidery texture 3D beautifully stitched fabric embroidered neutral rainbows and wildflowers on hills and fairy houses quilted background cottagecore photorealistic ![Image 7](2329604.jpeg) > Needlepoint - A painting of Rolling hills with a country road and an active volcano in the background made of crochet polyester. Framed in a modern frame. Studio shot on a light gray backdrop. ![Image 8](2329603.jpeg) > Needlepoint - father's day stitching, close-up view father and daughter playing and laughing together, eye-level view, soft colors, minimalistic background, in the style of Cross-stitching ![Image 9](2329611.jpeg) > Needlepoint - a boho design of flowers and mushrooms that looks like it was printed on an 3d embroidered cross stitch background
NeverSleep/Noromaid-13b-v0.2
NeverSleep
"2023-12-16T16:59:37Z"
1,404
21
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-10T21:16:16Z"
--- license: cc-by-nc-4.0 --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/VKX2Z2yjZX5J8kXzgeCYO.png) --- # Disclaimer: ## This is a ***VERY EXPERIMENTAL*** version, don't expect everything to work!!! ## If you don't like this model, use [Noromaid 0.1.1](https://huggingface.co/NeverSleep/Noromaid-13b-v0.1.1) You may use our custom **prompting format**(scroll down to download them!), or simple alpaca. **(Choose which fits best for you!)** Expect that many things will change in the next version!! --- Mergemonster and a new dataset were used. If you want a 7b, or 20b hit us up in the Community tab! # This model is a collab between [IkariDev](https://huggingface.co/IkariDev) and [Undi](https://huggingface.co/Undi95)! Test model. Suitable for RP, ERP and general stuff. [Recommended settings - No settings yet(Please suggest some over in the Community tab!)] <!-- description start --> ## Description <!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) --> This repo contains FP16 files of Noromaid-13b-v0.2. [FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2) <!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)--> <!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)--> <!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)--> <!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)--> <!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)--> [GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/Noromaid-13b-v0.2-GGUF) <!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)--> ## Ratings: Note: We have permission of all users to upload their ratings, we DONT screenshot random reviews without asking if we can put them here! No ratings yet! If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi". <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Custom format: UPDATED!! SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ## Training data used: - [no_robots dataset](https://huggingface.co/Undi95/Llama2-13B-no_robots-alpaca-lora) let the model have more human behavior, enhances the output. - [Aesir Private RP dataset] New data from a new and never used before dataset, add fresh data, no LimaRP spam, this is 100% new. Thanks to the [MinvervaAI Team](https://huggingface.co/MinervaAI) and, in particular, [Gryphe](https://huggingface.co/Gryphe) for letting us use it! - [Another private Aesir dataset] ## Others Undi: If you want to support me, you can [here](https://ko-fi.com/undiai). IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
ISTA-DASLab/Meta-Llama-3-8B-Instruct
ISTA-DASLab
"2024-04-23T09:10:56Z"
1,404
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-3", "conversational", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-23T08:06:51Z"
--- language: - en pipeline_tag: text-generation tags: - facebook - meta - pytorch - llama - llama-3 license: other license_name: llama3 license_link: LICENSE extra_gated_prompt: >- ### META LLAMA 3 COMMUNITY LICENSE AGREEMENT Meta Llama 3 Version Release Date: April 18, 2024 "Agreement" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein. "Documentation" means the specifications, manuals and documentation accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/. "Licensee" or "you" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entityโ€™s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf. "Meta Llama 3" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at https://llama.meta.com/llama-downloads. "Llama Materials" means, collectively, Metaโ€™s proprietary Meta Llama 3 and Documentation (and any portion thereof) made available under this Agreement. "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). 1. License Rights and Redistribution. a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Metaโ€™s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials. b. Redistribution and Use. i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service that uses any of them, including another AI model, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display โ€œBuilt with Meta Llama 3โ€ on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include โ€œLlama 3โ€ at the beginning of any such AI model name. ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you. iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a โ€œNoticeโ€ text file distributed as a part of such copies: โ€œMeta Llama 3 is licensed under the Meta Llama 3 Community License, Copyright ยฉ Meta Platforms, Inc. All Rights Reserved.โ€ iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference into this Agreement. v. You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Meta Llama 3 or derivative works thereof). 2. Additional Commercial Terms. If, on the Meta Llama 3 version release date, the monthly active users of the products or services made available by or for Licensee, or Licenseeโ€™s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights. 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN โ€œAS ISโ€ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS. 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING. 5. Intellectual Property. a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use โ€œLlama 3โ€ (the โ€œMarkโ€) solely as required to comply with the last sentence of Section 1.b.i. You will comply with Metaโ€™s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All goodwill arising out of your use of the Mark will inure to the benefit of Meta. b. Subject to Metaโ€™s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications. c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials. 6. Term and Termination. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement. 7. Governing Law and Jurisdiction. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement. ### Meta Llama 3 Acceptable Use Policy Meta is committed to promoting safe and fair use of its tools and features, including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable Use Policy (โ€œPolicyโ€). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy) #### Prohibited Uses We want everyone to use Meta Llama 3 safely and responsibly. You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate the law or othersโ€™ rights, including to: 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as: 1. Violence or terrorism 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material 3. Human trafficking, exploitation, and sexual violence 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials. 5. Sexual solicitation 6. Any other criminal activity 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama Materials 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system 2. Engage in, promote, incite, facilitate, or assist in the planning or development of activities that present a risk of death or bodily harm to individuals, including use of Meta Llama 3 related to the following: 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic Arms Regulations (ITAR) maintained by the United States Department of State 2. Guns and illegal weapons (including weapon development) 3. Illegal drugs and regulated/controlled substances 4. Operation of critical infrastructure, transportation technologies, or heavy machinery 5. Self-harm or harm to others, including suicide, cutting, and eating disorders 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual 3. Intentionally deceive or mislead others, including use of Meta Llama 3 related to the following: 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content 3. Generating, promoting, or further distributing spam 4. Impersonating another individual without consent, authorization, or legal right 5. Representing that the use of Meta Llama 3 or outputs are human-generated 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement 4. Fail to appropriately disclose to end users any known dangers of your AI system Please report any violation of this Policy, software โ€œbug,โ€ or other problems that could lead to a violation of this Policy through one of the following means: * Reporting issues with the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3) * Reporting risky content generated by the model: developers.facebook.com/llama_output_feedback * Reporting bugs and security concerns: facebook.com/whitehat/info * Reporting violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected] extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit --- ## Model Details Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety. **Model developers** Meta **Variations** Llama 3 comes in two sizes โ€” 8B and 70B parameters โ€” in pre-trained and instruction tuned variants. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama 3 </td> <td rowspan="2" >A new mix of publicly available online data. </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >15T+ </td> <td>March, 2023 </td> </tr> <tr> <td>70B </td> <td>8k </td> <td>Yes </td> <td>December, 2023 </td> </tr> </table> **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability. **Model Release Date** April 18, 2024. **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes). ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase. ### Use with transformers You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both. #### Transformers pipeline ```python import transformers import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) print(outputs[0]["generated_text"][len(prompt):]) ``` #### Transformers AutoModelForCausalLM ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "meta-llama/Meta-Llama-3-8B-Instruct" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.bfloat16, device_map="auto", ) messages = [ {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"}, {"role": "user", "content": "Who are you?"}, ] input_ids = tokenizer.apply_chat_template( messages, add_generation_prompt=True, return_tensors="pt" ).to(model.device) terminators = [ tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = model.generate( input_ids, max_new_tokens=256, eos_token_id=terminators, do_sample=True, temperature=0.6, top_p=0.9, ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` ### Use with `llama3` Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3) To download Original checkpoints, see the example command below leveraging `huggingface-cli`: ``` huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct ``` For Hugging Face support, we recommend using transformers or TGI, but a similar command works. ## Hardware and Software **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute. **Carbon Footprint Pretraining utilized a cumulative** 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Metaโ€™s sustainability program. <table> <tr> <td> </td> <td><strong>Time (GPU hours)</strong> </td> <td><strong>Power Consumption (W)</strong> </td> <td><strong>Carbon Emitted(tCO2eq)</strong> </td> </tr> <tr> <td>Llama 3 8B </td> <td>1.3M </td> <td>700 </td> <td>390 </td> </tr> <tr> <td>Llama 3 70B </td> <td>6.4M </td> <td>700 </td> <td>1900 </td> </tr> <tr> <td>Total </td> <td>7.7M </td> <td> </td> <td>2290 </td> </tr> </table> **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others. ## Training Data **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data. **Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively. ## Benchmarks In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md). ### Base pretrained models <table> <tr> <td><strong>Category</strong> </td> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama2 7B</strong> </td> <td><strong>Llama2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama2 70B</strong> </td> </tr> <tr> <td rowspan="6" >General </td> <td>MMLU (5-shot) </td> <td>66.6 </td> <td>45.7 </td> <td>53.8 </td> <td>79.5 </td> <td>69.7 </td> </tr> <tr> <td>AGIEval English (3-5 shot) </td> <td>45.9 </td> <td>28.8 </td> <td>38.7 </td> <td>63.0 </td> <td>54.8 </td> </tr> <tr> <td>CommonSenseQA (7-shot) </td> <td>72.6 </td> <td>57.6 </td> <td>67.6 </td> <td>83.8 </td> <td>78.7 </td> </tr> <tr> <td>Winogrande (5-shot) </td> <td>76.1 </td> <td>73.3 </td> <td>75.4 </td> <td>83.1 </td> <td>81.8 </td> </tr> <tr> <td>BIG-Bench Hard (3-shot, CoT) </td> <td>61.1 </td> <td>38.1 </td> <td>47.0 </td> <td>81.3 </td> <td>65.7 </td> </tr> <tr> <td>ARC-Challenge (25-shot) </td> <td>78.6 </td> <td>53.7 </td> <td>67.6 </td> <td>93.0 </td> <td>85.3 </td> </tr> <tr> <td>Knowledge reasoning </td> <td>TriviaQA-Wiki (5-shot) </td> <td>78.5 </td> <td>72.1 </td> <td>79.6 </td> <td>89.7 </td> <td>87.5 </td> </tr> <tr> <td rowspan="4" >Reading comprehension </td> <td>SQuAD (1-shot) </td> <td>76.4 </td> <td>72.2 </td> <td>72.1 </td> <td>85.6 </td> <td>82.6 </td> </tr> <tr> <td>QuAC (1-shot, F1) </td> <td>44.4 </td> <td>39.6 </td> <td>44.9 </td> <td>51.1 </td> <td>49.4 </td> </tr> <tr> <td>BoolQ (0-shot) </td> <td>75.7 </td> <td>65.5 </td> <td>66.9 </td> <td>79.0 </td> <td>73.1 </td> </tr> <tr> <td>DROP (3-shot, F1) </td> <td>58.4 </td> <td>37.9 </td> <td>49.8 </td> <td>79.7 </td> <td>70.2 </td> </tr> </table> ### Instruction tuned models <table> <tr> <td><strong>Benchmark</strong> </td> <td><strong>Llama 3 8B</strong> </td> <td><strong>Llama 2 7B</strong> </td> <td><strong>Llama 2 13B</strong> </td> <td><strong>Llama 3 70B</strong> </td> <td><strong>Llama 2 70B</strong> </td> </tr> <tr> <td>MMLU (5-shot) </td> <td>68.4 </td> <td>34.1 </td> <td>47.8 </td> <td>82.0 </td> <td>52.9 </td> </tr> <tr> <td>GPQA (0-shot) </td> <td>34.2 </td> <td>21.7 </td> <td>22.3 </td> <td>39.5 </td> <td>21.0 </td> </tr> <tr> <td>HumanEval (0-shot) </td> <td>62.2 </td> <td>7.9 </td> <td>14.0 </td> <td>81.7 </td> <td>25.6 </td> </tr> <tr> <td>GSM-8K (8-shot, CoT) </td> <td>79.6 </td> <td>25.7 </td> <td>77.4 </td> <td>93.0 </td> <td>57.5 </td> </tr> <tr> <td>MATH (4-shot, CoT) </td> <td>30.0 </td> <td>3.8 </td> <td>6.7 </td> <td>50.4 </td> <td>11.6 </td> </tr> </table> ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Llama 3-Instruct As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case. <span style="text-decoration:underline;">Safety</span> For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigations techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable. <span style="text-decoration:underline;">Refusals</span> In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. Weโ€™ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2. We built internal benchmarks and developed mitigations to limit false refusals making Llama 3 our most helpful model to date. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). #### Critical risks <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives) We have conducted a two fold assessment of the safety of the model in this area: * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks. * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model). ### <span style="text-decoration:underline;">Cyber Security </span> We have evaluated Llama 3 with CyberSecEval, Metaโ€™s cybersecurity safety eval suite, measuring Llama 3โ€™s propensity to suggest insecure code when used as a coding assistant, and Llama 3โ€™s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval). ### <span style="text-decoration:underline;">Child Safety</span> Child Safety risk assessments were conducted using a team of experts, to assess the modelโ€™s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences. ### Community Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama). Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community. ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3โ€™s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ## Contributors Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
John6666/asyncs-mix-pony-pony-v1-sdxl
John6666
"2024-06-26T18:23:27Z"
1,404
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-26T18:18:07Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony --- Original model is [here](https://civitai.com/models/18205?modelVersionId=599894).
cahya/gpt2-small-indonesian-522M
cahya
"2021-05-21T14:41:35Z"
1,403
6
transformers
[ "transformers", "pytorch", "tf", "jax", "gpt2", "text-generation", "id", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2022-03-02T23:29:05Z"
--- language: "id" license: "mit" datasets: - Indonesian Wikipedia widget: - text: "Pulau Dewata sering dikunjungi" --- # Indonesian GPT2 small model ## Model description It is GPT2-small model pre-trained with indonesian Wikipedia using a causal language modeling (CLM) objective. This model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several other language models that have been pre-trained with indonesian datasets. More detail about its usage on downstream tasks (text classification, text generation, etc) is available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers) ## Intended uses & limitations ### How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='cahya/gpt2-small-indonesian-522M') >>> set_seed(42) >>> generator("Kerajaan Majapahit adalah", max_length=30, num_return_sequences=5, num_beams=10) [{'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini berdiri pada abad ke-14'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-16. Kerajaan ini berdiri pada abad ke-14'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini berdiri pada abad ke-15'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-16. Kerajaan ini berdiri pada abad ke-15'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini merupakan kelanjutan dari Kerajaan Majapahit yang'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model model_name='cahya/gpt2-small-indonesian-522M' tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = GPT2Model.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in Tensorflow: ```python from transformers import GPT2Tokenizer, TFGPT2Model model_name='cahya/gpt2-small-indonesian-522M' tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = TFGPT2Model.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ## Training data This model was pre-trained with 522MB of indonesian Wikipedia. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 52,000. The inputs are sequences of 128 consecutive tokens.
medicalai/ClinicalGPT-base-zh
medicalai
"2023-06-21T16:01:38Z"
1,403
21
transformers
[ "transformers", "pytorch", "bloom", "text-generation", "medical", "license:afl-3.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-21T12:31:50Z"
--- license: afl-3.0 tags: - medical --- # ClinicalGPT This model card introduces ClinicalGPT model, a large language model designed and optimized for clinical scenarios. ClinicalGPT is fine-tuned on extensive and diverse medical datasets, including medical records, domain-specific knowledge, and multi-round dialogue consultations. The model is undergoing ongoing and continuous updates. ## Model Fine-tuning We set the learning rate to 5e-5, with a batch size of 128 and a maximum length of 1,024, training across 3 epochs. ## How to use the model Load the model via the transformers library: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("medicalai/ClinicalGPT-base-zh") model = AutoModelForCausalLM.from_pretrained("medicalai/ClinicalGPT-base-zh") ``` ## Limitations The project is intended for research purposes only and restricted from commercial or clinical use. The generated content by the model is subject to factors such as model computations, randomness, misinterpretation, and biases, and this project cannot guarantee its accuracy. This project assumes no legal liability for any content produced by the model. Users are advised to exercise caution and independently verify the generated results.
vibhorag101/llama-2-13b-chat-hf-phr_mental_therapy
vibhorag101
"2024-02-20T04:55:09Z"
1,403
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "dataset:vibhorag101/phr_mental_therapy_dataset", "dataset:jerryjalapeno/nart-100k-synthetic", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-17T05:14:36Z"
--- license: mit datasets: - vibhorag101/phr_mental_therapy_dataset - jerryjalapeno/nart-100k-synthetic language: - en pipeline_tag: text-generation --- # Model Card <!-- Provide a quick summary of what the model is/does. --> - This model is a finetune of the **llama-2-13b-chat-hf** model on a therapy dataset. - The model aims to provide basic therapy to the users and improve their mental health until they seek professional help. - The model has been adjusted to encourage giving cheerful responses to the user. The system prompt has been mentioned below. ## Model Details ### Training Hardware - RTX A5000 24GB - 48 Core Intel Xeon - 128GB Ram. ### Model Hyperparameters - This [training script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/finetuneScriptLLaMA-2.ipynb) was used to do the finetuning. - The shareGPT format dataset was converted to llama-2 training format using this [script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/llamaDataMaker.ipynb). - num_train_epochs = 2 - per_device_train_batch_size = 2 - per_device_eval_batch_size = 2 - gradient_accumulation_steps = 1 - max_seq_length = 4096 - lora_r = 64 - lora_alpha = 16 - lora_dropout = 0.1 - use_4bit = True - bnb_4bit_compute_dtype = "float16" - bnb_4bit_quant_type = "nf4" - use_nested_quant = False - fp16 = False - bf16 = True - Data Sample: 1000 (80:20 split) ### Model System Prompt You are a helpful and joyous mental therapy assistant. Always answer as helpfully and cheerfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content.Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. #### Model Training Data ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64eb1e4a55e4f0ecb9c4f406/x298HbUKHrom-RFmNgSbH.png) ### Model Benchmarks Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vibhorag101__llama-2-13b-chat-hf-phr_mental_therapy) | Metric | Value | |-----------------------|---------------------------| | Avg. | 42.5 | | ARC (25-shot) | 38.82 | | HellaSwag (10-shot) | 72.76 | | MMLU (5-shot) | 23.12 | | TruthfulQA (0-shot) | 46.92 | | Winogrande (5-shot) | 65.59 | | GSM8K (5-shot) | 7.81 |
KoboldAI/LLaMA2-13B-Estopia
KoboldAI
"2024-01-14T13:49:19Z"
1,403
18
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:TheBloke/Llama-2-13B-fp16", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-14T12:40:29Z"
--- base_model: - TheBloke/Llama-2-13B-fp16 tags: - mergekit - merge license: cc-by-nc-4.0 --- # Introduction - Estopia is a model focused on improving the dialogue and prose returned when using the instruct format. As a side benefit, character cards and similar seem to have also improved, remembering details well in many cases. - It focuses on "guided narratives" - using instructions to guide or explore fictional stories, where you act as a guide for the AI to narrate and fill in the details. - It has primarily been tested around prose, using instructions to guide narrative, detail retention and "neutrality" - in particular with regards to plot armour. Unless you define different rules for your adventure / narrative with instructions, it should be realistic in the responses provided. - It has been tested using different modes, such as instruct, chat, adventure and story modes - and should be able to do them all to a degree, with it's strengths being instruct and adventure, with story being a close second. # Usage - The Estopia model has been tested primarily using the Alpaca format, but with the range of models included likely has some understanding of others. Some examples of tested formats are below: - ```\n### Instruction:\nWhat colour is the sky?\n### Response:\nThe sky is...``` - ```<Story text>\n***\nWrite a summary of the text above\n***\nThe story starts by...``` - Using the Kobold Lite AI adventure mode - ```User:Hello there!\nAssistant:Good morning...\n``` - For settings, the following are recommended for general use: - Temperature: 0.8-1.2 - Min P: 0.05-0.1 - Max P: 0.92, or 1 if using a Min P greater than 0 - Top K: 0 - Response length: Higher than your usual amount most likely - for example a common value selected is 512. - Note: Response lengths are not guaranteed to always be this length. On occasion, responses may be shorter if they convey the response entirely, other times they could be upwards of this value. It depends mostly on the character card, instructions, etc. - Rep Pen: 1.1 - Rep Pen Range: 2 or 3x your response length - Stopping tokens (Not needed, but can help if the AI is writing too much): - ```##||$||---||$||ASSISTANT:||$||[End||$||</s>``` - A single string for Kobold Lite combining the ones below - ```##``` - ```---``` - ```ASSISTANT:``` - ```[End``` - ```</s>``` - The settings above should provide a generally good experience balancing instruction following and creativity. Generally the higher you set the temperature, the greater the creativity and higher chance of logical errors when providing responses from the AI. # Recipe This model was made in three stages, along with many experimental stages which will be skipped for brevity. The first was internally referred to as EstopiaV9, which has a high degree of instruction following and creativity in responses, though they were generally shorter and a little more restricted in the scope of outputs, but conveyed nuance better. ```yaml merge_method: task_arithmetic base_model: TheBloke/Llama-2-13B-fp16 models: - model: TheBloke/Llama-2-13B-fp16 - model: Undi95/UtopiaXL-13B parameters: weight: 1.0 - model: Doctor-Shotgun/cat-v1.0-13b parameters: weight: 0.02 - model: PygmalionAI/mythalion-13b parameters: weight: 0.10 - model: Undi95/Emerhyst-13B parameters: weight: 0.05 - model: CalderaAI/13B-Thorns-l2 parameters: weight: 0.05 - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.20 dtype: float16 ``` The second part of the merge was known as EstopiaV13. This produced responses which were long, but tended to write beyond good stopping points for further instructions to be added as it leant heavily on novel style prose. It did however benefit from a greater degree of neutrality as described above, and retained many of the detail tracking abilities of V9. ```yaml merge_method: task_arithmetic base_model: TheBloke/Llama-2-13B-fp16 models: - model: TheBloke/Llama-2-13B-fp16 - model: Undi95/UtopiaXL-13B parameters: weight: 1.0 - model: Doctor-Shotgun/cat-v1.0-13b parameters: weight: 0.01 - model: chargoddard/rpguild-chatml-13b parameters: weight: 0.02 - model: PygmalionAI/mythalion-13b parameters: weight: 0.08 - model: CalderaAI/13B-Thorns-l2 parameters: weight: 0.02 - model: KoboldAI/LLaMA2-13B-Tiefighter parameters: weight: 0.20 dtype: float16 ``` The third step was a merge between the two to retain the benefits of both as much as possible. This was performed using the dare merging technique. ```yaml # task-arithmetic style models: - model: EstopiaV9 parameters: weight: 1 density: 1 - model: EstopiaV13 parameters: weight: 0.05 density: 0.30 merge_method: dare_ties base_model: TheBloke/Llama-2-13B-fp16 parameters: int8_mask: true dtype: bfloat16 ``` # Model selection - Undi95/UtopiaXL-13B - Solid all around base for models, with the ability to write longer responses and generally good retension to detail. - Doctor-Shotgun/cat-v1.0-13b - A medical focused model which is added to focus a little more on the human responses, such as for psycology. - PygmalionAI/mythalion-13b - A roleplay and instruct focused model, which improves attentiveness to character card details and the variety of responses - Undi95/Emerhyst-13B - A roleplay but also longer form response model. It can be quite variable, but helps add to the depth and possible options the AI can respond with during narratives. - CalderaAI/13B-Thorns-l2 - A neutral and very attentive model. It is good at chat and following instructions, which help benefit these modes. - KoboldAI/LLaMA2-13B-Tiefighter - A solid all around model, focusing on story writing and adventure modes. It provides all around benefits to creativity and the prose in models, along with adventure mode support. - chargoddard/rpguild-chatml-13b - A roleplay model, which introduces new data and also improves the detail retention in longer narratives. # Notes - With the differing models inside, this model will not have perfect end of sequence tokens which is a problem many merges can share. While attempts have been made to minimise this, you may occasionally get oddly behaving tokens - this should be possible to resolve with a quick manual edit once and the model should pick up on it. - Chat is one of the least tested areas for this model. It works fairly well, but it can be quite character card dependant. - This is a narrative and prose focused model. As a result, it can and will talk for you if guided to do so (such as asking it to act as a co-author or narrator) within instructions or other contexts. This can be mitigated mostly by adding instructions to limit this, or using chat mode instead. # Future areas - Llava - Some success has been had with merging the llava lora on this. While no in depth testing has been performed, more narrative responses based on the images could be obtained - though there were drawbacks in the form of degraded performance in other areas, and hallucinations due to the fictional focus of this model. - Stheno - A merge which has similar promise from Sao. Some merge attempts have been made between the two and were promising, but not entirely consistent at the moment. With some possible refinement, this could produce an even stronger model. - DynamicFactor - All the merges used have been based on llama two in this merge, but a dare merge with dynamic factor (an attempted refinement of llama two) showed a beneficial improvement to the instruction abilities of the model, along with lengthy responses. It lost a little of the variety of responses, so perhaps if a balance of it could be added the instruction abilities and reasoning could be improved even further.
bullerwins/Codestral-22B-v0.1-hf
bullerwins
"2024-05-30T22:22:21Z"
1,403
15
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-29T17:35:02Z"
--- inference: false license: other license_name: mnpl license_link: https://mistral.ai/licences/MNPL-0.1.md tags: - code language: - code --- UPDATE 31/05/2024: Fixed the tokenizer issue after the update or the original model: https://huggingface.co/mistralai/Codestral-22B-v0.1/discussions/10 Converted using [this](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf/blob/main/convert_mistral_weights_to_hf-22B.py) script # Model Card for Codestral-22B-v0.1 Codestrall-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried: - As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications - As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code) ## Installation It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference). ``` pip install mistral_inference ``` ## Download ```py from huggingface_hub import snapshot_download from pathlib import Path mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1') mistral_models_path.mkdir(parents=True, exist_ok=True) snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path) ``` ### Chat After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. ``` mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256 ``` Will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines: ``` Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number. fn fibonacci(n: u32) -> u32 { match n { 0 => 0, 1 => 1, _ => fibonacci(n - 1) + fibonacci(n - 2), } } fn main() { let n = 10; println!("The {}th Fibonacci number is: {}", n, fibonacci(n)); } This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers. ``` ### Fill-in-the-middle (FIM) After installing `mistral_inference` and running `pip install --upgrade mistral_common` to make sure to have mistral_common>=1.2 installed: ```py from mistral_inference.model import Transformer from mistral_inference.generate import generate from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.instruct.request import FIMRequest tokenizer = MistralTokenizer.v3() model = Transformer.from_folder("~/codestral-22B-240529") prefix = """def add(""" suffix = """ return sum""" request = FIMRequest(prompt=prefix, suffix=suffix) tokens = tokenizer.encode_fim(request).tokens out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id) result = tokenizer.decode(out_tokens[0]) middle = result.split(suffix)[0].strip() print(middle) ``` Should give something along the following lines: ``` num1, num2): # Add two numbers sum = num1 + num2 # return the sum ``` ## Limitations The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs. ## License Codestral-22B-v0.1 is released under the `MNLP-0.1` license. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lรฉlio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothรฉe Lacroix, Thรฉophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf
RichardErkhov
"2024-06-05T16:03:29Z"
1,403
0
null
[ "gguf", "region:us" ]
null
"2024-06-05T15:29:02Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-medium-stable-diffusion-prompt-generator - GGUF - Model creator: https://huggingface.co/Ar4ikov/ - Original model: https://huggingface.co/Ar4ikov/gpt2-medium-stable-diffusion-prompt-generator/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-medium-stable-diffusion-prompt-generator.Q2_K.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q2_K.gguf) | Q2_K | 0.17GB | | [gpt2-medium-stable-diffusion-prompt-generator.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.IQ3_XS.gguf) | IQ3_XS | 0.18GB | | [gpt2-medium-stable-diffusion-prompt-generator.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.IQ3_S.gguf) | IQ3_S | 0.19GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q3_K_S.gguf) | Q3_K_S | 0.19GB | | [gpt2-medium-stable-diffusion-prompt-generator.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.IQ3_M.gguf) | IQ3_M | 0.2GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q3_K.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q3_K.gguf) | Q3_K | 0.21GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q3_K_M.gguf) | Q3_K_M | 0.21GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q3_K_L.gguf) | Q3_K_L | 0.23GB | | [gpt2-medium-stable-diffusion-prompt-generator.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.IQ4_XS.gguf) | IQ4_XS | 0.22GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q4_0.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q4_0.gguf) | Q4_0 | 0.23GB | | [gpt2-medium-stable-diffusion-prompt-generator.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.IQ4_NL.gguf) | IQ4_NL | 0.23GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q4_K_S.gguf) | Q4_K_S | 0.23GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q4_K.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q4_K.gguf) | Q4_K | 0.25GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q4_K_M.gguf) | Q4_K_M | 0.25GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q4_1.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q4_1.gguf) | Q4_1 | 0.25GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q5_0.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q5_0.gguf) | Q5_0 | 0.27GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q5_K_S.gguf) | Q5_K_S | 0.27GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q5_K.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q5_K.gguf) | Q5_K | 0.29GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q5_K_M.gguf) | Q5_K_M | 0.29GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q5_1.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q5_1.gguf) | Q5_1 | 0.29GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q6_K.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q6_K.gguf) | Q6_K | 0.32GB | | [gpt2-medium-stable-diffusion-prompt-generator.Q8_0.gguf](https://huggingface.co/RichardErkhov/Ar4ikov_-_gpt2-medium-stable-diffusion-prompt-generator-gguf/blob/main/gpt2-medium-stable-diffusion-prompt-generator.Q8_0.gguf) | Q8_0 | 0.41GB | Original model description: Entry not found
TheBloke/Toppy-M-7B-GGUF
TheBloke
"2023-11-07T14:57:18Z"
1,402
27
transformers
[ "transformers", "gguf", "mistral", "not-for-all-audiences", "nsfw", "base_model:Undi95/Toppy-M-7B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
null
"2023-11-07T14:53:00Z"
--- base_model: Undi95/Toppy-M-7B inference: false license: cc-by-nc-4.0 model_creator: Undi model_name: Toppy M 7B model_type: mistral prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke tags: - not-for-all-audiences - nsfw --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Toppy M 7B - GGUF - Model creator: [Undi](https://huggingface.co/Undi95) - Original model: [Toppy M 7B](https://huggingface.co/Undi95/Toppy-M-7B) <!-- description start --> ## Description This repo contains GGUF format model files for [Undi's Toppy M 7B](https://huggingface.co/Undi95/Toppy-M-7B). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Toppy-M-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Toppy-M-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF) * [Undi's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Undi95/Toppy-M-7B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [toppy-m-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes | | [toppy-m-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss | | [toppy-m-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss | | [toppy-m-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss | | [toppy-m-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [toppy-m-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss | | [toppy-m-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended | | [toppy-m-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [toppy-m-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended | | [toppy-m-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended | | [toppy-m-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss | | [toppy-m-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Toppy-M-7B-GGUF/blob/main/toppy-m-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Toppy-M-7B-GGUF and below it, a specific filename to download, such as: toppy-m-7b.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Toppy-M-7B-GGUF toppy-m-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Toppy-M-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Toppy-M-7B-GGUF toppy-m-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m toppy-m-7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Toppy-M-7B-GGUF", model_file="toppy-m-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้˜ฟๆ˜Ž, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Undi's Toppy M 7B <!-- description start --> ## Description This repo contains fp16 files of Toppy-M-7B, a merge I have done with the new task_arithmetic merge method from mergekit. This project was a request from [BlueNipples](https://huggingface.co/BlueNipples) : [link](https://huggingface.co/Undi95/Utopia-13B/discussions/1) <!-- description end --> <!-- description start --> ## Models and loras used - [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) - [NousResearch/Nous-Capybara-7B-V1.9](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1.9) - [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) - [lemonilia/AshhLimaRP-Mistral-7B](lemonilia/AshhLimaRP-Mistral-7B) - [Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b](https://huggingface.co/Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b) - [Undi95/Mistral-pippa-sharegpt-7b-qlora](Undi95/Mistral-pippa-sharegpt-7b-qlora) <!-- description end --> ## The sauce ``` openchat/openchat_3.5 lemonilia/AshhLimaRP-Mistral-7B (LoRA) x 0.38 NousResearch/Nous-Capybara-7B-V1.9 Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b x 0.27 HuggingFaceH4/zephyr-7b-beta Undi95/Mistral-pippa-sharegpt-7b-qlora x 0.38 merge_method: task_arithmetic base_model: mistralai/Mistral-7B-v0.1 models: - model: mistralai/Mistral-7B-v0.1 - model: Undi95/zephyr-7b-beta-pippa-sharegpt parameters: weight: 0.42 - model: Undi95/Nous-Capybara-7B-V1.9-120-Days parameters: weight: 0.29 - model: Undi95/openchat_3.5-LimaRP-13B parameters: weight: 0.48 dtype: bfloat16 ``` <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` If you want to support me, you can [here](https://ko-fi.com/undiai). <!-- original-model-card end -->
mindy-labs/mindy-7b-v2
mindy-labs
"2023-12-20T21:30:13Z"
1,402
10
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-14T22:32:05Z"
--- license: apache-2.0 tags: - merge --- ## Model Details Mindy-7b is a *Frankenstein๐Ÿ‘ป* merge between [AIDC-ai-business/Marcoroni-7B-v3](https://huggingface.co/AIDC-ai-business/Marcoroni-7B-v3) and [Weyaxi/Seraph-7B](https://huggingface.co/Weyaxi/Seraph-7B ). ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [Mindy Group, Inc.](https://mindy.com/) - **Model type:** Transformer-based language model - **Language(s) (NLP):** English - **License:** apache-2.0 (MORE DETAILS TO BE ADDED)
Undi95/Llamix2-MLewd-4x13B
Undi95
"2023-12-17T02:13:14Z"
1,402
60
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "not-for-all-audiences", "nsfw", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-15T01:10:52Z"
--- license: cc-by-nc-4.0 tags: - not-for-all-audiences - nsfw --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/Y9cKDc4heP6TcG4ZjcwPQ.png) THIS MODEL IS MADE FOR LEWD SEXUAL, CRUDE AND KINKY CONTENT IN OUTPUT CAN AND WILL HAPPEN. YOU'RE WARNED This is a 4x13B MoE Llama2 model, one of the first (if not the first!). Always, a big thanks to [Charles Goddard](https://huggingface.co/chargoddard) who is the brain behind all of those new Mixtral model, and his amazing tools! WARNING: ALL THE "K" GGUF QUANT OF MIXTRAL MODELS SEEMS TO BE [BROKEN](https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TvjEP14ps7ZUgJ-0-mhIX.png), PREFER Q4_0, Q5_0 or Q8_0! <!-- description start --> ## Description This repo contains fp16 files of Llamix2-MLewd-4x13B, a very hot MoE of Llama2 model. <!-- description end --> <!-- description start --> ## Models used The list of model used and their activator/theme can be found [here](https://huggingface.co/Undi95/Llamix2-MLewd-4x13B/blob/main/config.yaml) <!-- description end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` Special thanks to Sushi and Shena โ™ฅ If you want to support me, you can [here](https://ko-fi.com/undiai).
riotu-lab/ArabianGPT-08B-V2
riotu-lab
"2024-04-17T06:13:38Z"
1,402
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "ArabianGPT", "ar", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-04-17T05:20:31Z"
--- license: apache-2.0 language: - ar tags: - ArabianGPT widget: - text: "ุฃุนู„ู†ุช ูˆุฒุงุฑุฉ ุงู„ุญุฌ ููŠ ุงู„ู…ู…ู„ูƒุฉ ุงู„ุนุฑุจูŠุฉ ุงู„ุณุนูˆุฏูŠุฉ" example_title: "ู…ุซุงู„ ูก" - text: "ูŠุจุฏูˆ ุงู„ูŠูˆู… ุฌู…ูŠู„ุงุŒ ุณุฃู‚ูˆู… ุจุชุญุถูŠุฑ" example_title: "ู…ุซุงู„ ูข" - text: "ุฅู† ุงู„ุชู‚ู†ูŠุงุช ุงู„ุญุฏูŠุซุฉ" example_title: "ู…ุซุงู„ ูฃ" --- # ArabianGPT Model Overview ## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation <p style="color: red;">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.8B, and users engage with and apply the model's outputs at their own risk.</p> > **Important Note:** Currently, we offer a raw pre-trained model. Our team is actively working on releasing instruction-based LLMs that are fine-tuned and augmented with LRHF. The first set of pre-trained models has been made available for community exploration. While we do have models fine-tuned for specific tasks such as summarization and sentiment analysis, they are still in the development phase. ## How you can use this Pre-Trained? You are invited to utilize this pre-trained, native Arabic language model as an experimental tool to assess its capabilities, aid in its fine-tuning, and evaluate its performance across a variety of downstream tasks. We encourage you to review our technical report for a comprehensive understanding of the model's performance metrics and the specific downstream tasks it has been tested on. This will provide valuable insights into its applicability and effectiveness in diverse applications. ## Introduction ArabianGPT-0.8B, part of the ArabianLLM initiatives, is a specialized GPT model optimized for the Arabic language. Developed at Prince Sultan University's Robotics and Internet of Things Lab, this model is a leap forward in natural language modeling and generation for Arabic, tackling the language's unique challenges. ## Key Features - **Architecture**: GPT-2 - **Model Size**: 0.8 billion parameters - **Layers**: 36 - **Model Attention Layers (MAL)**: 20 - **Context Window Size**: 1024 tokens ## Training - **Dataset**: Scraped texts contains scientific articles, and general texts - **Data Size**: 117 GB - **Tokenizer**: Aranizer 64K - **Tokens**: Over 14 billion - **Hardware**: 5 NDIVIA A100 GPUs - **Performance**: loss of 3.6 ## Role in ArabianLLM Initiatives ArabianGPT-0.8B is crucial for advancing Arabic language processing, addressing challenges unique to Arabic morphology and dialects. ## Usage Suitable for Arabic text generation tasks. Example usage with Transformers Pipeline: ```python from transformers import pipeline pipe = pipeline("text-generation", model="riotu-lab/ArabianGPT-08B", max_new_tokens=1024) text = '' pipe(text) ``` ## Limitations and Ethical Considerations - The model may have context understanding or text generation limitations in certain scenarios. - Emphasis on ethical use to prevent misinformation or harmful content propagation. ## Acknowledgments Special thanks to Prince Sultan University, particularly the Robotics and Internet of Things Lab. ## Contact Information For inquiries: [[email protected]](mailto:[email protected]). ## Disclaimer for the Use of Large Language Models (LLMs) for Text Generation <p style="color: red;">We disclaim all responsibility for any harm, inaccuracies, or inappropriate content generated by ArabianGPT-0.3B, and users engage with and apply the model's outputs at their own risk.</p>
TheBloke/neural-chat-7B-v3-1-AWQ
TheBloke
"2023-11-17T14:02:23Z"
1,401
4
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "base_model:Intel/neural-chat-7b-v3-1", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "awq", "region:us" ]
text-generation
"2023-11-15T18:18:55Z"
--- base_model: Intel/neural-chat-7b-v3-1 inference: false license: apache-2.0 model_creator: Intel model_name: Neural Chat 7B v3-1 model_type: mistral prompt_template: '### System: {system_message} ### User: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Neural Chat 7B v3-1 - AWQ - Model creator: [Intel](https://huggingface.co/Intel) - Original model: [Neural Chat 7B v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) <!-- description start --> ## Description This repo contains AWQ model files for [Intel's Neural Chat 7B v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1). These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/). ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - Llama and Mistral models only - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code <!-- description end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-GGUF) * [Intel's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Intel/neural-chat-7b-v3-1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Orca-Hashes ``` ### System: {system_message} ### User: {prompt} ### Assistant: ``` <!-- prompt-template end --> <!-- README_AWQ.md-provided-files start --> ## Provided files, and AWQ parameters I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered. Models are released as sharded safetensors files. | Branch | Bits | GS | AWQ Dataset | Seq Len | Size | | ------ | ---- | -- | ----------- | ------- | ---- | | [main](https://huggingface.co/TheBloke/neural-chat-7B-v3-1-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1) | 4096 | 4.15 GB <!-- README_AWQ.md-provided-files end --> <!-- README_AWQ.md-text-generation-webui start --> ## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui) Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui). It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/neural-chat-7B-v3-1-AWQ`. 3. Click **Download**. 4. The model will start downloading. Once it's finished it will say "Done". 5. In the top left, click the refresh icon next to **Model**. 6. In the **Model** dropdown, choose the model you just downloaded: `neural-chat-7B-v3-1-AWQ` 7. Select **Loader: AutoAWQ**. 8. Click Load, and the model will load and is now ready for use. 9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right. 10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started! <!-- README_AWQ.md-text-generation-webui end --> <!-- README_AWQ.md-use-from-vllm start --> ## Multi-user inference server: vLLM Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/). - Please ensure you are using vLLM version 0.2 or later. - When using vLLM as a server, pass the `--quantization awq` parameter. For example: ```shell python3 -m vllm.entrypoints.api_server --model TheBloke/neural-chat-7B-v3-1-AWQ --quantization awq --dtype auto ``` - When using vLLM from Python code, again set `quantization=awq`. For example: ```python from vllm import LLM, SamplingParams prompts = [ "Tell me about AI", "Write a story about llamas", "What is 291 - 150?", "How much wood would a woodchuck chuck if a woodchuck could chuck wood?", ] prompt_template=f'''### System: {system_message} ### User: {prompt} ### Assistant: ''' prompts = [prompt_template.format(prompt=prompt) for prompt in prompts] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="TheBloke/neural-chat-7B-v3-1-AWQ", quantization="awq", dtype="auto") outputs = llm.generate(prompts, sampling_params) # Print the outputs. for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}") ``` <!-- README_AWQ.md-use-from-vllm start --> <!-- README_AWQ.md-use-from-tgi start --> ## Multi-user inference server: Hugging Face Text Generation Inference (TGI) Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0` Example Docker parameters: ```shell --model-id TheBloke/neural-chat-7B-v3-1-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 ``` Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later): ```shell pip3 install huggingface-hub ``` ```python from huggingface_hub import InferenceClient endpoint_url = "https://your-endpoint-url-here" prompt = "Tell me about AI" prompt_template=f'''### System: {system_message} ### User: {prompt} ### Assistant: ''' client = InferenceClient(endpoint_url) response = client.text_generation(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.95, top_k=40, repetition_penalty=1.1) print(f"Model output: ", response) ``` <!-- README_AWQ.md-use-from-tgi end --> <!-- README_AWQ.md-use-from-python start --> ## Inference from Python code using Transformers ### Install the necessary packages - Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later. - Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later. ```shell pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0" ``` Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0. If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command: ```shell pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl ``` If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead: ```shell pip3 uninstall -y autoawq git clone https://github.com/casper-hansen/AutoAWQ cd AutoAWQ pip3 install . ``` ### Transformers example code (requires Transformers 4.35.0 and later) ```python from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer model_name_or_path = "TheBloke/neural-chat-7B-v3-1-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) model = AutoModelForCausalLM.from_pretrained( model_name_or_path, low_cpu_mem_usage=True, device_map="cuda:0" ) # Using the text streamer to stream output one token at a time streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt = "Tell me about AI" prompt_template=f'''### System: {system_message} ### User: {prompt} ### Assistant: ''' # Convert prompt to tokens tokens = tokenizer( prompt_template, return_tensors='pt' ).input_ids.cuda() generation_params = { "do_sample": True, "temperature": 0.7, "top_p": 0.95, "top_k": 40, "max_new_tokens": 512, "repetition_penalty": 1.1 } # Generate streamed output, visible one token at a time generation_output = model.generate( tokens, streamer=streamer, **generation_params ) # Generation without a streamer, which will include the prompt in the output generation_output = model.generate( tokens, **generation_params ) # Get the tokens from the output, decode them, print them token_output = generation_output[0] text_output = tokenizer.decode(token_output) print("model.generate output: ", text_output) # Inference is also possible via Transformers' pipeline from transformers import pipeline pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, **generation_params ) pipe_output = pipe(prompt_template)[0]['generated_text'] print("pipeline output: ", pipe_output) ``` <!-- README_AWQ.md-use-from-python end --> <!-- README_AWQ.md-compatibility start --> ## Compatibility The files provided are tested to work with: - [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`. - [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later. - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later. - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later. <!-- README_AWQ.md-compatibility end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, ้˜ฟๆ˜Ž, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjรคreholt, John Detwiler, Leonard Tan, Iucharbius Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model card: Intel's Neural Chat 7B v3-1 ## Fine-tuning on [Habana](https://habana.ai/) Gaudi2 This model is a fine-tuned model based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the open source dataset [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca). Then we align it with DPO algorithm. For more details, you can refer our blog: [The Practice of Supervised Fine-tuning and Direct Preference Optimization on Habana Gaudi2](https://medium.com/@NeuralCompressor/the-practice-of-supervised-finetuning-and-direct-preference-optimization-on-habana-gaudi2-a1197d8a3cd3). ## Model date Neural-chat-7b-v3-1 was trained between September and October, 2023. ## Evaluation We submit our model to [open_llm_leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), and the model performance has been **improved significantly** as we see from the average metric of 7 tasks from the leaderboard. | Model | Average โฌ†๏ธ| ARC (25-s) โฌ†๏ธ | HellaSwag (10-s) โฌ†๏ธ | MMLU (5-s) โฌ†๏ธ| TruthfulQA (MC) (0-s) โฌ†๏ธ | Winogrande (5-s) | GSM8K (5-s) | DROP (3-s) | | --- | --- | --- | --- | --- | --- | --- | --- | --- | |[mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50.32 | 59.58 | 83.31 | 64.16 | 42.15 | 78.37 | 18.12 | 6.14 | | [Intel/neural-chat-7b-v3](https://huggingface.co/Intel/neural-chat-7b-v3) | **57.31** | 67.15 | 83.29 | 62.26 | 58.77 | 78.06 | 1.21 | 50.43 | | [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) | **59.06** | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 | ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-04 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-HPU - num_devices: 8 - gradient_accumulation_steps: 8 - total_train_batch_size: 64 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2.0 ## Prompt Template ``` ### System: {system} ### User: {usr} ### Assistant: ``` ## Inference with transformers ```shell import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'Intel/neural-chat-7b-v3-1' ) ``` ## Ethical Considerations and Limitations neural-chat-7b-v3-1 can produce factually incorrect output, and should not be relied on to produce factually accurate information. neural-chat-7b-v3-1 was trained on [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) based on [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs. Therefore, before deploying any applications of neural-chat-7b-v3-1, developers should perform safety testing. ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please cosult an attorney before using this model for commercial purposes. ## Organizations developing the model The NeuralChat team with members from Intel/SATG/AIA/AIPT. Core team members: Kaokao Lv, Liang Lv, Chang Wang, Wenxin Zhang, Xuhui Ren, and Haihao Shen. ## Useful links * Intel Neural Compressor [link](https://github.com/intel/neural-compressor) * Intel Extension for Transformers [link](https://github.com/intel/intel-extension-for-transformers)
FreedomIntelligence/Apollo-2B
FreedomIntelligence
"2024-04-26T11:11:38Z"
1,401
2
transformers
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:2403.03640", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-03-06T13:05:32Z"
--- license: apache-2.0 --- # Multilingual Medicine: Model, Dataset, Benchmark, Code Covering English, Chinese, French, Hindi, Spanish, Hindi, Arabic So far <p align="center"> ๐Ÿ‘จ๐Ÿปโ€๐Ÿ’ป<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> โ€ข๐Ÿ“ƒ <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> โ€ข ๐ŸŒ <a href="https://apollo.llmzoo.com/" target="_blank">Demo</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <br> <a href="./README_zh.md"> ไธญๆ–‡ </a> | <a href="./README.md"> English </p> ![Apollo](assets/apollo_medium_final.png) ## ๐ŸŒˆ Update * **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released. * **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> is published๏ผ๐ŸŽ‰ * **[2024.01.23]** Apollo repo is published๏ผ๐ŸŽ‰ ## Results ๐Ÿค—<a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B" target="_blank">Apollo-0.5B</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-1.8B" target="_blank">Apollo-1.8B</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B" target="_blank">Apollo-2B</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B" target="_blank">Apollo-6B</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B" target="_blank">Apollo-7B</a> ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-0.5B-GGUF" target="_blank">Apollo-0.5B-GGUF</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-2B-GGUF" target="_blank">Apollo-2B-GGUF</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-6B-GGUF" target="_blank">Apollo-6B-GGUF</a> โ€ข ๐Ÿค— <a href="https://huggingface.co/FreedomIntelligence/Apollo-7B-GGUF" target="_blank">Apollo-7B-GGUF</a> ![Apollo](assets/result.png) ## Usage Format User:{query}\nAssistant:{response}<|endoftext|> ## Dataset & Evaluation - Dataset ๐Ÿค— <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> <details><summary>Click to expand</summary> ![Apollo](assets/dataset.png) - [Zip File](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/blob/main/ApolloCorpus.zip) - [Data category](https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus/tree/main/train) - Pretrain: - data item: - json_name: {data_source}_{language}_{data_type}.json - data_type: medicalBook, medicalGuideline, medicalPaper, medicalWeb(from online forum), medicalWiki - language: en(English), zh(chinese), es(spanish), fr(french), hi(Hindi) - data_type: qa(generated qa from text) - data_type==text: list of string ``` [ "string1", "string2", ... ] ``` - data_type==qa: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... ] ``` - SFT: - json_name: {data_source}_{language}.json - data_type: code, general, math, medicalExam, medicalPatient - data item: list of qa pairs(list of string) ``` [ [ "q1", "a1", "q2", "a2", ... ], ... ] ``` </details> - Evaluation ๐Ÿค— <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> <details><summary>Click to expand</summary> - EN: - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options) - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test) - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper. - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - ZH: - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test) - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper - Randomly sample 2,000 multiple-choice questions with single answer. - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu) - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology - [CExam](https://github.com/williamliujl/CMExam): Not used in the paper - Randomly sample 2,000 multiple-choice questions - ES: [Head_qa](https://huggingface.co/datasets/head_qa) - FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA) - HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine - AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi) - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine </details> ## Results reproduction <details><summary>Click to expand</summary> **Waiting for Update** </details> ## Citation Please use the following citation if you intend to use our dataset for training or evaluation: ``` @misc{wang2024apollo, title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People}, author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang}, year={2024}, eprint={2403.03640}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
John6666/3d-rendering-style-beta2-sdxl
John6666
"2024-06-26T23:03:30Z"
1,401
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "3DCG", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2024-06-26T22:58:03Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - 3DCG --- Original model is [here](https://civitai.com/models/431957/3d-rendering-style?modelVersionId=572220).
DMetaSoul/sbert-chinese-general-v1
DMetaSoul
"2024-01-26T09:08:08Z"
1,400
5
sentence-transformers
[ "sentence-transformers", "pytorch", "bert", "feature-extraction", "sentence-similarity", "transformers", "semantic-search", "chinese", "mteb", "zh", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-embeddings-inference", "region:us" ]
sentence-similarity
"2022-03-25T08:49:55Z"
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers - semantic-search - chinese - mteb model-index: - name: sbert-chinese-general-v1 results: - task: type: STS dataset: type: C-MTEB/AFQMC name: MTEB AFQMC config: default split: validation revision: None metrics: - type: cos_sim_pearson value: 22.293919432958074 - type: cos_sim_spearman value: 22.56718923553609 - type: euclidean_pearson value: 22.525656322797026 - type: euclidean_spearman value: 22.56718923553609 - type: manhattan_pearson value: 22.501773028824065 - type: manhattan_spearman value: 22.536992587828397 - task: type: STS dataset: type: C-MTEB/ATEC name: MTEB ATEC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 30.33575274463879 - type: cos_sim_spearman value: 30.298708742167772 - type: euclidean_pearson value: 32.33094743729218 - type: euclidean_spearman value: 30.298710993858734 - type: manhattan_pearson value: 32.31155376195945 - type: manhattan_spearman value: 30.267669681690744 - task: type: Classification dataset: type: mteb/amazon_reviews_multi name: MTEB AmazonReviewsClassification (zh) config: zh split: test revision: 1399c76144fd37290681b995c656ef9b2e06e26d metrics: - type: accuracy value: 37.507999999999996 - type: f1 value: 36.436808400753286 - task: type: STS dataset: type: C-MTEB/BQ name: MTEB BQ config: default split: test revision: None metrics: - type: cos_sim_pearson value: 41.493256724214255 - type: cos_sim_spearman value: 40.98395961967895 - type: euclidean_pearson value: 41.12345737966565 - type: euclidean_spearman value: 40.983959619555996 - type: manhattan_pearson value: 41.02584539471014 - type: manhattan_spearman value: 40.87549513383032 - task: type: BitextMining dataset: type: mteb/bucc-bitext-mining name: MTEB BUCC (zh-en) config: zh-en split: test revision: d51519689f32196a32af33b075a01d0e7c51e252 metrics: - type: accuracy value: 9.794628751974724 - type: f1 value: 9.350535369492716 - type: precision value: 9.179392662804986 - type: recall value: 9.794628751974724 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringP2P name: MTEB CLSClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 34.984726547788284 - task: type: Clustering dataset: type: C-MTEB/CLSClusteringS2S name: MTEB CLSClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 27.81945732281589 - task: type: Reranking dataset: type: C-MTEB/CMedQAv1-reranking name: MTEB CMedQAv1 config: default split: test revision: None metrics: - type: map value: 53.06586280826805 - type: mrr value: 59.58781746031746 - task: type: Reranking dataset: type: C-MTEB/CMedQAv2-reranking name: MTEB CMedQAv2 config: default split: test revision: None metrics: - type: map value: 52.83635946154306 - type: mrr value: 59.315079365079356 - task: type: Retrieval dataset: type: C-MTEB/CmedqaRetrieval name: MTEB CmedqaRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 5.721 - type: map_at_10 value: 8.645 - type: map_at_100 value: 9.434 - type: map_at_1000 value: 9.586 - type: map_at_3 value: 7.413 - type: map_at_5 value: 8.05 - type: mrr_at_1 value: 9.626999999999999 - type: mrr_at_10 value: 13.094 - type: mrr_at_100 value: 13.854 - type: mrr_at_1000 value: 13.958 - type: mrr_at_3 value: 11.724 - type: mrr_at_5 value: 12.409 - type: ndcg_at_1 value: 9.626999999999999 - type: ndcg_at_10 value: 11.35 - type: ndcg_at_100 value: 15.593000000000002 - type: ndcg_at_1000 value: 19.619 - type: ndcg_at_3 value: 9.317 - type: ndcg_at_5 value: 10.049 - type: precision_at_1 value: 9.626999999999999 - type: precision_at_10 value: 2.796 - type: precision_at_100 value: 0.629 - type: precision_at_1000 value: 0.11800000000000001 - type: precision_at_3 value: 5.476 - type: precision_at_5 value: 4.1209999999999996 - type: recall_at_1 value: 5.721 - type: recall_at_10 value: 15.190000000000001 - type: recall_at_100 value: 33.633 - type: recall_at_1000 value: 62.019999999999996 - type: recall_at_3 value: 9.099 - type: recall_at_5 value: 11.423 - task: type: PairClassification dataset: type: C-MTEB/CMNLI name: MTEB Cmnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 77.36620565243535 - type: cos_sim_ap value: 85.92291866877001 - type: cos_sim_f1 value: 78.19390231037029 - type: cos_sim_precision value: 71.24183006535948 - type: cos_sim_recall value: 86.64952069207388 - type: dot_accuracy value: 77.36620565243535 - type: dot_ap value: 85.94113738490068 - type: dot_f1 value: 78.19390231037029 - type: dot_precision value: 71.24183006535948 - type: dot_recall value: 86.64952069207388 - type: euclidean_accuracy value: 77.36620565243535 - type: euclidean_ap value: 85.92291893444687 - type: euclidean_f1 value: 78.19390231037029 - type: euclidean_precision value: 71.24183006535948 - type: euclidean_recall value: 86.64952069207388 - type: manhattan_accuracy value: 77.29404690318701 - type: manhattan_ap value: 85.88284362100919 - type: manhattan_f1 value: 78.17836812144213 - type: manhattan_precision value: 71.18448838548666 - type: manhattan_recall value: 86.69628244096329 - type: max_accuracy value: 77.36620565243535 - type: max_ap value: 85.94113738490068 - type: max_f1 value: 78.19390231037029 - task: type: Retrieval dataset: type: C-MTEB/CovidRetrieval name: MTEB CovidRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 26.976 - type: map_at_10 value: 35.18 - type: map_at_100 value: 35.921 - type: map_at_1000 value: 35.998999999999995 - type: map_at_3 value: 32.763 - type: map_at_5 value: 34.165 - type: mrr_at_1 value: 26.976 - type: mrr_at_10 value: 35.234 - type: mrr_at_100 value: 35.939 - type: mrr_at_1000 value: 36.016 - type: mrr_at_3 value: 32.771 - type: mrr_at_5 value: 34.172999999999995 - type: ndcg_at_1 value: 26.976 - type: ndcg_at_10 value: 39.635 - type: ndcg_at_100 value: 43.54 - type: ndcg_at_1000 value: 45.723 - type: ndcg_at_3 value: 34.652 - type: ndcg_at_5 value: 37.186 - type: precision_at_1 value: 26.976 - type: precision_at_10 value: 5.406 - type: precision_at_100 value: 0.736 - type: precision_at_1000 value: 0.091 - type: precision_at_3 value: 13.418 - type: precision_at_5 value: 9.293999999999999 - type: recall_at_1 value: 26.976 - type: recall_at_10 value: 53.766999999999996 - type: recall_at_100 value: 72.761 - type: recall_at_1000 value: 90.148 - type: recall_at_3 value: 40.095 - type: recall_at_5 value: 46.233000000000004 - task: type: Retrieval dataset: type: C-MTEB/DuRetrieval name: MTEB DuRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 11.285 - type: map_at_10 value: 30.259000000000004 - type: map_at_100 value: 33.772000000000006 - type: map_at_1000 value: 34.037 - type: map_at_3 value: 21.038999999999998 - type: map_at_5 value: 25.939 - type: mrr_at_1 value: 45.1 - type: mrr_at_10 value: 55.803999999999995 - type: mrr_at_100 value: 56.301 - type: mrr_at_1000 value: 56.330999999999996 - type: mrr_at_3 value: 53.333 - type: mrr_at_5 value: 54.798 - type: ndcg_at_1 value: 45.1 - type: ndcg_at_10 value: 41.156 - type: ndcg_at_100 value: 49.518 - type: ndcg_at_1000 value: 52.947 - type: ndcg_at_3 value: 39.708 - type: ndcg_at_5 value: 38.704 - type: precision_at_1 value: 45.1 - type: precision_at_10 value: 20.75 - type: precision_at_100 value: 3.424 - type: precision_at_1000 value: 0.42700000000000005 - type: precision_at_3 value: 35.632999999999996 - type: precision_at_5 value: 30.080000000000002 - type: recall_at_1 value: 11.285 - type: recall_at_10 value: 43.242000000000004 - type: recall_at_100 value: 68.604 - type: recall_at_1000 value: 85.904 - type: recall_at_3 value: 24.404 - type: recall_at_5 value: 32.757 - task: type: Retrieval dataset: type: C-MTEB/EcomRetrieval name: MTEB EcomRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 21 - type: map_at_10 value: 28.364 - type: map_at_100 value: 29.199 - type: map_at_1000 value: 29.265 - type: map_at_3 value: 25.717000000000002 - type: map_at_5 value: 27.311999999999998 - type: mrr_at_1 value: 21 - type: mrr_at_10 value: 28.364 - type: mrr_at_100 value: 29.199 - type: mrr_at_1000 value: 29.265 - type: mrr_at_3 value: 25.717000000000002 - type: mrr_at_5 value: 27.311999999999998 - type: ndcg_at_1 value: 21 - type: ndcg_at_10 value: 32.708 - type: ndcg_at_100 value: 37.184 - type: ndcg_at_1000 value: 39.273 - type: ndcg_at_3 value: 27.372000000000003 - type: ndcg_at_5 value: 30.23 - type: precision_at_1 value: 21 - type: precision_at_10 value: 4.66 - type: precision_at_100 value: 0.685 - type: precision_at_1000 value: 0.086 - type: precision_at_3 value: 10.732999999999999 - type: precision_at_5 value: 7.82 - type: recall_at_1 value: 21 - type: recall_at_10 value: 46.6 - type: recall_at_100 value: 68.5 - type: recall_at_1000 value: 85.6 - type: recall_at_3 value: 32.2 - type: recall_at_5 value: 39.1 - task: type: Classification dataset: type: C-MTEB/IFlyTek-classification name: MTEB IFlyTek config: default split: validation revision: None metrics: - type: accuracy value: 44.878799538283964 - type: f1 value: 33.84678310261366 - task: type: Classification dataset: type: C-MTEB/JDReview-classification name: MTEB JDReview config: default split: test revision: None metrics: - type: accuracy value: 82.1951219512195 - type: ap value: 46.78292030042397 - type: f1 value: 76.20482468514128 - task: type: STS dataset: type: C-MTEB/LCQMC name: MTEB LCQMC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 62.84331627244547 - type: cos_sim_spearman value: 68.39990265073726 - type: euclidean_pearson value: 66.87431827169324 - type: euclidean_spearman value: 68.39990264979167 - type: manhattan_pearson value: 66.89702078900328 - type: manhattan_spearman value: 68.42107302159141 - task: type: Reranking dataset: type: C-MTEB/Mmarco-reranking name: MTEB MMarcoReranking config: default split: dev revision: None metrics: - type: map value: 9.28600891904827 - type: mrr value: 8.057936507936509 - task: type: Retrieval dataset: type: C-MTEB/MMarcoRetrieval name: MTEB MMarcoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 22.820999999999998 - type: map_at_10 value: 30.44 - type: map_at_100 value: 31.35 - type: map_at_1000 value: 31.419000000000004 - type: map_at_3 value: 28.134999999999998 - type: map_at_5 value: 29.482000000000003 - type: mrr_at_1 value: 23.782 - type: mrr_at_10 value: 31.141999999999996 - type: mrr_at_100 value: 32.004 - type: mrr_at_1000 value: 32.068000000000005 - type: mrr_at_3 value: 28.904000000000003 - type: mrr_at_5 value: 30.214999999999996 - type: ndcg_at_1 value: 23.782 - type: ndcg_at_10 value: 34.625 - type: ndcg_at_100 value: 39.226 - type: ndcg_at_1000 value: 41.128 - type: ndcg_at_3 value: 29.968 - type: ndcg_at_5 value: 32.35 - type: precision_at_1 value: 23.782 - type: precision_at_10 value: 4.994 - type: precision_at_100 value: 0.736 - type: precision_at_1000 value: 0.09 - type: precision_at_3 value: 12.13 - type: precision_at_5 value: 8.495999999999999 - type: recall_at_1 value: 22.820999999999998 - type: recall_at_10 value: 47.141 - type: recall_at_100 value: 68.952 - type: recall_at_1000 value: 83.985 - type: recall_at_3 value: 34.508 - type: recall_at_5 value: 40.232 - task: type: Classification dataset: type: mteb/amazon_massive_intent name: MTEB MassiveIntentClassification (zh-CN) config: zh-CN split: test revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7 metrics: - type: accuracy value: 57.343644922663074 - type: f1 value: 56.744802953803486 - task: type: Classification dataset: type: mteb/amazon_massive_scenario name: MTEB MassiveScenarioClassification (zh-CN) config: zh-CN split: test revision: 7d571f92784cd94a019292a1f45445077d0ef634 metrics: - type: accuracy value: 62.363819771351714 - type: f1 value: 62.15920863434656 - task: type: Retrieval dataset: type: C-MTEB/MedicalRetrieval name: MTEB MedicalRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 14.6 - type: map_at_10 value: 18.231 - type: map_at_100 value: 18.744 - type: map_at_1000 value: 18.811 - type: map_at_3 value: 17.133000000000003 - type: map_at_5 value: 17.663 - type: mrr_at_1 value: 14.6 - type: mrr_at_10 value: 18.231 - type: mrr_at_100 value: 18.744 - type: mrr_at_1000 value: 18.811 - type: mrr_at_3 value: 17.133000000000003 - type: mrr_at_5 value: 17.663 - type: ndcg_at_1 value: 14.6 - type: ndcg_at_10 value: 20.349 - type: ndcg_at_100 value: 23.204 - type: ndcg_at_1000 value: 25.44 - type: ndcg_at_3 value: 17.995 - type: ndcg_at_5 value: 18.945999999999998 - type: precision_at_1 value: 14.6 - type: precision_at_10 value: 2.7199999999999998 - type: precision_at_100 value: 0.414 - type: precision_at_1000 value: 0.06 - type: precision_at_3 value: 6.833 - type: precision_at_5 value: 4.5600000000000005 - type: recall_at_1 value: 14.6 - type: recall_at_10 value: 27.200000000000003 - type: recall_at_100 value: 41.4 - type: recall_at_1000 value: 60 - type: recall_at_3 value: 20.5 - type: recall_at_5 value: 22.8 - task: type: Classification dataset: type: C-MTEB/MultilingualSentiment-classification name: MTEB MultilingualSentiment config: default split: validation revision: None metrics: - type: accuracy value: 66.58333333333333 - type: f1 value: 66.26700927460007 - task: type: PairClassification dataset: type: C-MTEB/OCNLI name: MTEB Ocnli config: default split: validation revision: None metrics: - type: cos_sim_accuracy value: 72.00866269626421 - type: cos_sim_ap value: 77.00520104243304 - type: cos_sim_f1 value: 74.39303710490151 - type: cos_sim_precision value: 65.69579288025889 - type: cos_sim_recall value: 85.74445617740233 - type: dot_accuracy value: 72.00866269626421 - type: dot_ap value: 77.00520104243304 - type: dot_f1 value: 74.39303710490151 - type: dot_precision value: 65.69579288025889 - type: dot_recall value: 85.74445617740233 - type: euclidean_accuracy value: 72.00866269626421 - type: euclidean_ap value: 77.00520104243304 - type: euclidean_f1 value: 74.39303710490151 - type: euclidean_precision value: 65.69579288025889 - type: euclidean_recall value: 85.74445617740233 - type: manhattan_accuracy value: 72.1710882512182 - type: manhattan_ap value: 77.00551017913976 - type: manhattan_f1 value: 74.23423423423424 - type: manhattan_precision value: 64.72898664571878 - type: manhattan_recall value: 87.0116156282999 - type: max_accuracy value: 72.1710882512182 - type: max_ap value: 77.00551017913976 - type: max_f1 value: 74.39303710490151 - task: type: Classification dataset: type: C-MTEB/OnlineShopping-classification name: MTEB OnlineShopping config: default split: test revision: None metrics: - type: accuracy value: 88.19000000000001 - type: ap value: 85.13415594781077 - type: f1 value: 88.17344156114062 - task: type: STS dataset: type: C-MTEB/PAWSX name: MTEB PAWSX config: default split: test revision: None metrics: - type: cos_sim_pearson value: 13.70522140998517 - type: cos_sim_spearman value: 15.07546667334743 - type: euclidean_pearson value: 17.49511420225285 - type: euclidean_spearman value: 15.093970931789618 - type: manhattan_pearson value: 17.44069961390521 - type: manhattan_spearman value: 15.076029291596962 - task: type: STS dataset: type: C-MTEB/QBQTC name: MTEB QBQTC config: default split: test revision: None metrics: - type: cos_sim_pearson value: 26.835294224547155 - type: cos_sim_spearman value: 27.920204597498856 - type: euclidean_pearson value: 26.153796707702803 - type: euclidean_spearman value: 27.920971379720548 - type: manhattan_pearson value: 26.21954147857523 - type: manhattan_spearman value: 27.996860049937478 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh) config: zh split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 55.15901259718581 - type: cos_sim_spearman value: 61.57967880874167 - type: euclidean_pearson value: 53.83523291596683 - type: euclidean_spearman value: 61.57967880874167 - type: manhattan_pearson value: 54.99971428907956 - type: manhattan_spearman value: 61.61229543613867 - task: type: STS dataset: type: mteb/sts22-crosslingual-sts name: MTEB STS22 (zh-en) config: zh-en split: test revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80 metrics: - type: cos_sim_pearson value: 34.20930208460845 - type: cos_sim_spearman value: 33.879011104224524 - type: euclidean_pearson value: 35.08526425284862 - type: euclidean_spearman value: 33.879011104224524 - type: manhattan_pearson value: 35.509419089701275 - type: manhattan_spearman value: 33.30035487147621 - task: type: STS dataset: type: C-MTEB/STSB name: MTEB STSB config: default split: test revision: None metrics: - type: cos_sim_pearson value: 82.30068282185835 - type: cos_sim_spearman value: 82.16763221361724 - type: euclidean_pearson value: 80.52772752433374 - type: euclidean_spearman value: 82.16797037220333 - type: manhattan_pearson value: 80.51093859500105 - type: manhattan_spearman value: 82.17643310049654 - task: type: Reranking dataset: type: C-MTEB/T2Reranking name: MTEB T2Reranking config: default split: dev revision: None metrics: - type: map value: 65.14113035189213 - type: mrr value: 74.9589270937443 - task: type: Retrieval dataset: type: C-MTEB/T2Retrieval name: MTEB T2Retrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 12.013 - type: map_at_10 value: 30.885 - type: map_at_100 value: 34.643 - type: map_at_1000 value: 34.927 - type: map_at_3 value: 21.901 - type: map_at_5 value: 26.467000000000002 - type: mrr_at_1 value: 49.623 - type: mrr_at_10 value: 58.05200000000001 - type: mrr_at_100 value: 58.61300000000001 - type: mrr_at_1000 value: 58.643 - type: mrr_at_3 value: 55.947 - type: mrr_at_5 value: 57.229 - type: ndcg_at_1 value: 49.623 - type: ndcg_at_10 value: 41.802 - type: ndcg_at_100 value: 49.975 - type: ndcg_at_1000 value: 53.504 - type: ndcg_at_3 value: 43.515 - type: ndcg_at_5 value: 41.576 - type: precision_at_1 value: 49.623 - type: precision_at_10 value: 22.052 - type: precision_at_100 value: 3.6450000000000005 - type: precision_at_1000 value: 0.45399999999999996 - type: precision_at_3 value: 38.616 - type: precision_at_5 value: 31.966 - type: recall_at_1 value: 12.013 - type: recall_at_10 value: 41.891 - type: recall_at_100 value: 67.096 - type: recall_at_1000 value: 84.756 - type: recall_at_3 value: 24.695 - type: recall_at_5 value: 32.09 - task: type: Classification dataset: type: C-MTEB/TNews-classification name: MTEB TNews config: default split: validation revision: None metrics: - type: accuracy value: 39.800999999999995 - type: f1 value: 38.5345899934575 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringP2P name: MTEB ThuNewsClusteringP2P config: default split: test revision: None metrics: - type: v_measure value: 40.16574242797479 - task: type: Clustering dataset: type: C-MTEB/ThuNewsClusteringS2S name: MTEB ThuNewsClusteringS2S config: default split: test revision: None metrics: - type: v_measure value: 24.232617974671754 - task: type: Retrieval dataset: type: C-MTEB/VideoRetrieval name: MTEB VideoRetrieval config: default split: dev revision: None metrics: - type: map_at_1 value: 24.6 - type: map_at_10 value: 31.328 - type: map_at_100 value: 32.088 - type: map_at_1000 value: 32.164 - type: map_at_3 value: 29.133 - type: map_at_5 value: 30.358 - type: mrr_at_1 value: 24.6 - type: mrr_at_10 value: 31.328 - type: mrr_at_100 value: 32.088 - type: mrr_at_1000 value: 32.164 - type: mrr_at_3 value: 29.133 - type: mrr_at_5 value: 30.358 - type: ndcg_at_1 value: 24.6 - type: ndcg_at_10 value: 35.150999999999996 - type: ndcg_at_100 value: 39.024 - type: ndcg_at_1000 value: 41.157 - type: ndcg_at_3 value: 30.637999999999998 - type: ndcg_at_5 value: 32.833 - type: precision_at_1 value: 24.6 - type: precision_at_10 value: 4.74 - type: precision_at_100 value: 0.66 - type: precision_at_1000 value: 0.083 - type: precision_at_3 value: 11.667 - type: precision_at_5 value: 8.06 - type: recall_at_1 value: 24.6 - type: recall_at_10 value: 47.4 - type: recall_at_100 value: 66 - type: recall_at_1000 value: 83 - type: recall_at_3 value: 35 - type: recall_at_5 value: 40.300000000000004 - task: type: Classification dataset: type: C-MTEB/waimai-classification name: MTEB Waimai config: default split: test revision: None metrics: - type: accuracy value: 83.96000000000001 - type: ap value: 65.11027167433211 - type: f1 value: 82.03549710974653 license: apache-2.0 language: - zh --- # DMetaSoul/sbert-chinese-general-v1 ๆญคๆจกๅž‹ๅŸบไบŽ [bert-base-chinese](https://huggingface.co/bert-base-chinese) ็‰ˆๆœฌ BERT ๆจกๅž‹๏ผŒๅœจ NLIใ€PAWS-Xใ€PKU-Paraphrase-Bankใ€STS ็ญ‰่ฏญไน‰็›ธไผผๆ•ฐๆฎ้›†ไธŠ่ฟ›่กŒ่ฎญ็ปƒ๏ผŒ้€‚็”จไบŽ**้€š็”จ่ฏญไน‰ๅŒน้…**ๅœบๆ™ฏ๏ผˆๆญคๆจกๅž‹ๅœจ Chinese-STS ไปปๅŠกไธŠๆ•ˆๆžœ่พƒๅฅฝ๏ผŒไฝ†ๅœจๅ…ถๅฎƒไปปๅŠกไธŠๆ•ˆๆžœๅนถ้žๆœ€ไผ˜๏ผŒๅญ˜ๅœจไธ€ๅฎš่ฟ‡ๆ‹Ÿๅˆ้ฃŽ้™ฉ๏ผ‰๏ผŒๆฏ”ๅฆ‚ๆ–‡ๆœฌ็‰นๅพๆŠฝๅ–ใ€ๆ–‡ๆœฌๅ‘้‡่š็ฑปใ€ๆ–‡ๆœฌ่ฏญไน‰ๆœ็ดข็ญ‰ไธšๅŠกๅœบๆ™ฏใ€‚ ๆณจ๏ผšๆญคๆจกๅž‹็š„[่ฝป้‡ๅŒ–็‰ˆๆœฌ](https://huggingface.co/DMetaSoul/sbert-chinese-general-v1-distill)๏ผŒไนŸๅทฒ็ปๅผ€ๆบๅ•ฆ๏ผ # Usage ## 1. Sentence-Transformers ้€š่ฟ‡ [sentence-transformers](https://www.SBERT.net) ๆก†ๆžถๆฅไฝฟ็”จ่ฏฅๆจกๅž‹๏ผŒ้ฆ–ๅ…ˆ่ฟ›่กŒๅฎ‰่ฃ…๏ผš ``` pip install -U sentence-transformers ``` ็„ถๅŽไฝฟ็”จไธ‹้ข็š„ไปฃ็ ๆฅ่ฝฝๅ…ฅ่ฏฅๆจกๅž‹ๅนถ่ฟ›่กŒๆ–‡ๆœฌ่กจๅพๅ‘้‡็š„ๆๅ–๏ผš ```python from sentence_transformers import SentenceTransformer sentences = ["ๆˆ‘็š„ๅ„ฟๅญ๏ผไป–็Œ›็„ถ้—ดๅ–Š้“๏ผŒๆˆ‘็š„ๅ„ฟๅญๅœจๅ“ชๅ„ฟ๏ผŸ", "ๆˆ‘็š„ๅ„ฟๅญๅ‘ข๏ผไป–็ช็„ถๅ–Š้“๏ผŒๆˆ‘็š„ๅ„ฟๅญๅœจๅ“ช้‡Œ๏ผŸ"] model = SentenceTransformer('DMetaSoul/sbert-chinese-general-v1') embeddings = model.encode(sentences) print(embeddings) ``` ## 2. HuggingFace Transformers ๅฆ‚ๆžœไธๆƒณไฝฟ็”จ [sentence-transformers](https://www.SBERT.net) ็š„่ฏ๏ผŒไนŸๅฏไปฅ้€š่ฟ‡ HuggingFace Transformers ๆฅ่ฝฝๅ…ฅ่ฏฅๆจกๅž‹ๅนถ่ฟ›่กŒๆ–‡ๆœฌๅ‘้‡ๆŠฝๅ–๏ผš ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ["ๆˆ‘็š„ๅ„ฟๅญ๏ผไป–็Œ›็„ถ้—ดๅ–Š้“๏ผŒๆˆ‘็š„ๅ„ฟๅญๅœจๅ“ชๅ„ฟ๏ผŸ", "ๆˆ‘็š„ๅ„ฟๅญๅ‘ข๏ผไป–็ช็„ถๅ–Š้“๏ผŒๆˆ‘็š„ๅ„ฟๅญๅœจๅ“ช้‡Œ๏ผŸ"] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('DMetaSoul/sbert-chinese-general-v1') model = AutoModel.from_pretrained('DMetaSoul/sbert-chinese-general-v1') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation ่ฏฅๆจกๅž‹ๅœจๅ…ฌๅผ€็š„ๅ‡ ไธช่ฏญไน‰ๅŒน้…ๆ•ฐๆฎ้›†ไธŠ่ฟ›่กŒไบ†่ฏ„ๆต‹๏ผŒ่ฎก็ฎ—ไบ†ๅ‘้‡็›ธไผผๅบฆ่ทŸ็œŸๅฎžๆ ‡็ญพไน‹้—ด็š„็›ธๅ…ณๆ€ง็ณปๆ•ฐ๏ผš | | **csts_dev** | **csts_test** | **afqmc** | **lcqmc** | **bqcorpus** | **pawsx** | **xiaobu** | | ------------ | ------------ | ------------- | --------- | --------- | ------------ | --------- | ---------- | | **spearman** | 84.54% | 82.17% | 23.80% | 65.94% | 45.52% | 11.52% | 48.51% | ## Citing & Authors E-mail: [email protected]
swl-models/xiaolxl-guofeng-v1
swl-models
"2023-02-01T01:00:00Z"
1,400
1
diffusers
[ "diffusers", " stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-01-31T15:43:58Z"
--- license: creativeml-openrail-m language: - en library_name: diffusers pipeline_tag: text-to-image tags: - ' stable-diffusion' - stable-diffusion-diffusers duplicated_from: xiaolxl/Gf_style --- # Gf_style - ไป‹็ป ๆฌข่ฟŽไฝฟ็”จGf_styleๆจกๅž‹ - ่ฟ™ๆ˜ฏไธ€ไธชไธญๅ›ฝๅŽไธฝๅค้ฃŽ้ฃŽๆ ผๆจกๅž‹๏ผŒไนŸๅฏไปฅ่ฏดๆ˜ฏไธ€ไธชๅค้ฃŽๆธธๆˆ่ง’่‰ฒๆจกๅž‹๏ผŒๅ…ทๆœ‰2.5D็š„่ดจๆ„Ÿใ€‚่ฟ™ๆ˜ฏไธ€ไธชๆจกๅž‹็ณปๅˆ—๏ผŒไผšๅœจๆœชๆฅไธๆ–ญๆ›ดๆ–ฐๆจกๅž‹ใ€‚ 2.0็‰ˆๆœฌๅทฒๅ‘ๅธƒ๏ผš[https://huggingface.co/xiaolxl/Gf_style2](https://huggingface.co/xiaolxl/Gf_style2) 3.0็‰ˆๆœฌๅทฒๅ‘ๅธƒ๏ผš[https://huggingface.co/xiaolxl/Gf_style3](https://huggingface.co/xiaolxl/Gf_style3) -- Welcome to Gf_style - This is a model of Chinese gorgeous ancient style, which can also be said to be an ancient game character model, with the effect of 2.5D texture. This is a series of models that will be updated in the future. # install - ๅฎ‰่ฃ…ๆ•™็จ‹ 1. ๅฐ†XXX.saftensorsๆจกๅž‹ๅ’ŒXXX.yamlๆ”พๅ…ฅSD็›ฎๅฝ• - Put the XXX.safetensors model and XXX.yaml into the SD directory 2. ่ฏท่ฎฐไฝ้€‰ๆ‹ฉไปปไฝ•VAEๆ–‡ไปถ๏ผŒๅฆๅˆ™ๅ›พๅฝขๅฐ†ไธบ็ฐ่‰ฒ - Remember to select any VAE file, otherwise the drawing will be gray # How to use - ๅฆ‚ไฝ•ไฝฟ็”จ (TIP:ไบบ็‰ฉๆ˜ฏ็ซ–ๅ›พ็‚ผๅˆถ๏ผŒ็†่ฎบไธŠ็”Ÿๆˆ็ซ–ๅ›พๆ•ˆๆžœๆ›ดๅฅฝ) ๅฆ‚ๆžœๆ‚จๆƒณไฝฟๅ›พ็‰‡ๅฐฝๅฏ่ƒฝๆ›ดๅฅฝ๏ผŒ่ฏทๅฐ่ฏ•ไปฅไธ‹้…็ฝฎ - If you want to make the picture better as possible, please try the following configuration - Sampling steps:**30 or 50** - Sampler:**DDIM** or **(DPM++ 2M Karras, DPM++ SDE Karras)** - These two have different surprises - ่ฟ™ไธคไธชๆœ‰ไธไธ€ๆ ท็š„ๆƒŠๅ–œ - The size of the picture should be at least **768**, otherwise it will collapse - ๅ›พ็‰‡ๅคงๅฐ่‡ณๅฐ‘768๏ผŒไธ็„ถไผšๅดฉๅ›พ - Turn on Hires fix:**R-ESRGAN 4x+ Anime6B** and **Upscale by 2** - If the face is deformed, try to Open **face repair** - **key word(Start):** ``` {best quality}, {{masterpiece}}, {highres}, {an extremely delicate and beautiful}, original, extremely detailed wallpaper, ``` - **Negative words:** ``` NSFW, lowres,bad anatomy,bad hands, text, error, missing fingers,extra digit, fewer digits, cropped, worstquality, low quality, normal quality,jpegartifacts,signature, watermark, username,blurry,bad feet ``` # Examples - ไพ‹ๅ›พ (ๅฏๅœจๆ–‡ไปถๅˆ—่กจไธญๆ‰พๅˆฐๅŽŸๅ›พ๏ผŒๅนถๆ”พๅ…ฅWebUiๆŸฅ็œ‹ๅ…ณ้”ฎ่ฏ็ญ‰ไฟกๆฏ) - (You can find the original image in the file list, and put WebUi to view keywords and other information) Town building map -- ้•‡ๆฅผๅ›พ <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/f1.png> <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/f2.png> Graph generated by keywords in How to use -- How to useไธญ็š„ๅ…ณ้”ฎ่ฏๆ‰€็”Ÿๆˆ็š„ๅ›พ <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/e1.png> <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/e2.png> <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/e3.png> <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/e4.png> Ending figure -- ๆ”ถๅฐพๅ›พ <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/g1.png> <img src=https://huggingface.co/xiaolxl/Gf_style/resolve/main/examples/g2.png>
quantumaikr/open_llama_7b_hf
quantumaikr
"2023-05-03T16:43:07Z"
1,400
3
transformers
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-03T11:22:45Z"
Entry not found
TheBloke/GPlatty-30B-SuperHOT-8K-fp16
TheBloke
"2023-07-09T20:24:57Z"
1,400
1
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-29T11:12:38Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Lilloukas' GPlatty 30B fp16 This is fp16 pytorch format model files for [Lilloukas' GPlatty 30B](https://huggingface.co/lilloukas/GPlatty-30B) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/GPlatty-30B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lilloukas/GPlatty-30B) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/GPlatty-30B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Lilloukas' GPlatty 30B # Information GPlatty-30B is a merge of [lilloukas/Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) and [chansung/gpt4-alpaca-lora-30b](https://huggingface.co/chansung/gpt4-alpaca-lora-30b) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 63.6 | | ARC (25-shot) | 66.0 | | HellaSwag (10-shot) | 84.8 | | TruthfulQA (0-shot) | 53.8 | | Avg. | 67.0 | We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above. ## Model Details * **Trained by**: Platypus-30B trained by Cole Hunter & Ariel Lee; gpt4-alpaca-lora-30b by chansung. * **Model type:** **GPlatty-30B** is an auto-regressive language model based on the LLaMA transformer architecture. * **Language(s)**: English * **License for base weights**: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md). | Hyperparameter | Value | |---------------------------|-------| | \\(n_\text{parameters}\\) | 33B | | \\(d_\text{model}\\) | 6656 | | \\(n_\text{layers}\\) | 60 | | \\(n_\text{heads}\\) | 52 | ## Reproducing Evaluation Results Install LM Evaluation Harness: ``` git clone https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e . ``` Each task was evaluated on a single A100 80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/GPlatty-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affect the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly. ## Citations ```bibtex @article{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ```
GeneZC/MiniMA-2-3B
GeneZC
"2024-03-04T14:38:37Z"
1,400
17
transformers
[ "transformers", "pytorch", "llama", "text-generation", "en", "zh", "dataset:EleutherAI/pile", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:p208p2002/wudao", "arxiv:2311.07052", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-27T03:36:23Z"
--- language: - en - zh license: apache-2.0 library_name: transformers datasets: - EleutherAI/pile - togethercomputer/RedPajama-Data-1T - p208p2002/wudao widget: - text: <s> 4 + 3 = model-index: - name: MiniMA-2-3B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 44.71 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 69.33 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 41.22 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 38.44 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 66.69 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 8.11 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=GeneZC/MiniMA-2-3B name: Open LLM Leaderboard --- ## MiniMA-2-3B ๐Ÿ“‘ [arXiv](https://arxiv.org/abs/2311.07052) | ๐Ÿ‘ป [GitHub](https://github.com/GeneZC/MiniMA) | ๐Ÿค— [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | ๐Ÿค— [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | ๐Ÿค– [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | ๐Ÿค– [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) | ๐Ÿค— [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | ๐Ÿค— [HuggingFace-MiniMA-2](https://huggingface.co/GeneZC/MiniMA-2-3B) | ๐Ÿค— [HuggingFace-MiniChat-2](https://huggingface.co/GeneZC/MiniChat-2-3B) ๐Ÿ†• **Updates from MiniMA-3B**: - continued from MiniMA-3B without distillation; - better data mixture; - more trained tokens. โ— Must comply with LICENSE of LLaMA-2 since it is derived from LLaMA-2. A language model continued from MiniMA-3B. Completing the compute-performance pareto frontier together with MiniMA-3B and other arts. <img src="./teaser_a.jpg" alt="teaser_a" width="700" /> **Standard Benchmarks** |Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)| |--|--|--|--|--|--|--|--| |Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49| |ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56| |BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55| |StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99| |Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97| |Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42| |LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10| || |MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11| |MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72| |MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87| |MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13| The following is an example code snippet to use MiniMA-2-3B: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer # MiniMA tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniMA-2-3B", use_fast=False) # GPU. model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval() # CPU. # model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval() prompt = "Question: Sherrie tells the truth. Vernell says Sherrie tells the truth. Alexis says Vernell lies. Michaela says Alexis tells the truth. Elanor says Michaela tells the truth. Does Elanor tell the truth?\nAnswer: No\n\nQuestion: Kristian lies. Sherrie says Kristian lies. Delbert says Sherrie lies. Jerry says Delbert tells the truth. Shalonda says Jerry tells the truth. Does Shalonda tell the truth?\nAnswer: No\n\nQuestion: Vina tells the truth. Helene says Vina lies. Kandi says Helene tells the truth. Jamey says Kandi lies. Ka says Jamey lies. Does Ka tell the truth?\nAnswer: No\n\nQuestion: Christie tells the truth. Ka says Christie tells the truth. Delbert says Ka lies. Leda says Delbert tells the truth. Lorine says Leda tells the truth. Does Lorine tell the truth?\nAnswer:" input_ids = tokenizer([prompt]).input_ids output_ids = model.generate( torch.as_tensor(input_ids).cuda(), do_sample=True, temperature=0.7, max_new_tokens=1024, ) output_ids = output_ids[0][len(input_ids[0]):] output = tokenizer.decode(output_ids, skip_special_tokens=True).strip() # output: "No" ``` ## Bibtex ```bibtex @article{zhang2023law, title={Towards the Law of Capacity Gap in Distilling Language Models}, author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan}, year={2023}, url={https://arxiv.org/abs/2311.07052} } ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_GeneZC__MiniMA-2-3B) | Metric |Value| |---------------------------------|----:| |Avg. |44.75| |AI2 Reasoning Challenge (25-Shot)|44.71| |HellaSwag (10-Shot) |69.33| |MMLU (5-Shot) |41.22| |TruthfulQA (0-shot) |38.44| |Winogrande (5-shot) |66.69| |GSM8k (5-shot) | 8.11|
uukuguy/speechless-coder-ds-6.7b
uukuguy
"2024-01-07T06:07:46Z"
1,400
5
transformers
[ "transformers", "pytorch", "gguf", "llama", "text-generation", "code", "en", "dataset:ise-uiuc/Magicoder-OSS-Instruct-75K", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-30T05:30:22Z"
--- language: - en library_name: transformers pipeline_tag: text-generation datasets: - ise-uiuc/Magicoder-OSS-Instruct-75K - ise-uiuc/Magicoder-Evol-Instruct-110K tags: - code license: apache-2.0 model-index: - name: SpeechlessCoder results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: verified: false --- <p><h1> speechless-coder-ds-6.7b </h1></p> [4, 5 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/uukuguy/speechless-coder-ds-6.7b/tree/main/GGUF) Use the following dataset to fine-tune deepseek-ai/deepseek-coder-6.7b in order to improve the model's reasoning and planning abilities. context window length: 8192 max_tokens > 128 && < 8192 > Total 185,193 samples 426 MB - ise-uiuc/Magicoder-OSS-Instruct-75K 75,186 samples - ise-uiuc/Magicoder-Evol-Instruct-110K 110,007 samples 50 samples/T=0.2/MaxTokens=512/Top_P=0.95 Code: https://github.com/uukuguy/speechless ## How to Prompt the Model This model accepts the Alpaca instruction format. For example: ``` You are an intelligent programming assistant. ### Instruction: Implement a linked list in C++ ### Response: ``` ## HumanEval | Metric | Value | | --- | --- | | humaneval-python | | [Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard) CodeLlama-34B-Python: 53.29 CodeLlama-34B-Instruct: 50.79 CodeLlama-13B-Instruct: 50.6 CodeLlama-34B: 45.11 CodeLlama-13B-Python: 42.89 CodeLlama-13B: 35.07 ## BigCode Eval 0.314188 - metrics_humanevalfixtests-cpp: "pass@1": 0.27439024390243905 - metrics_humanevalfixtests-go: "pass@1": 0.23170731707317074 - metrics_humanevalfixtests-java: "pass@1": 0.25609756097560976 - metrics_humanevalfixtests-js: "pass@1": 0.21951219512195122 - metrics_humanevalfixtests-python: "pass@1": 0.23780487804878048 - metrics_humanevalfixtests-rust: "pass@1": 0.13414634146341464 0.390111 - metrics_humanevalsynthesize-cpp: "pass@1": 0.3780487804878049 - metrics_humanevalsynthesize-go: "pass@1": 0.25609756097560976 - metrics_humanevalsynthesize-java: "pass@1": 0.45121951219512196 - metrics_humanevalsynthesize-js: "pass@1": 0.4268292682926829 - metrics_humanevalsynthesize-python: "pass@1": 0.5365853658536586 - metrics_humanevalsynthesize-rust: "pass@1": 0.25 - metrics_mbpp: "pass@1": 0.432 ## LMEval [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Metric | Value | | --- | --- | | ARC | | | HellaSwag | | | MMLU | | | TruthfulQA | | | Average | |
TheBloke/Platypus-30B-SuperHOT-8K-fp16
TheBloke
"2023-07-09T20:24:56Z"
1,399
2
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-29T09:56:57Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Lilloukas' Platypus 30B fp16 This is fp16 pytorch format model files for [Lilloukas' Platypus 30B](https://huggingface.co/lilloukas/Platypus-30B) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test). [Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged on to the base model, and then 8K context can be achieved during inference by using `trust_remote_code=True`. Note that `config.json` has been set to a sequence length of 8192. This can be modified to 4096 if you want to try with a smaller sequence length. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Platypus-30B-SuperHOT-8K-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Platypus-30B-SuperHOT-8K-GGML) * [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Platypus-30B-SuperHOT-8K-fp16) * [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lilloukas/Platypus-30B) ## How to use this model from Python code First make sure you have Einops installed: ``` pip3 install auto-gptq ``` Then run the following code. `config.json` has been default to a sequence length of 8192, but you can also configure this in your Python code. The provided modelling code, activated with `trust_remote_code=True` will automatically set the `scale` parameter from the configured `max_position_embeddings`. Eg for 8192, `scale` is set to `4`. ```python from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline import argparse model_name_or_path = "TheBloke/Platypus-30B-SuperHOT-8K-fp16" use_triton = False tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True) config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True) # Change this to the sequence length you want config.max_position_embeddings = 8192 model = AutoModelForCausalLM.from_pretrained(model_name_or_path, config=config, trust_remote_code=True, device_map='auto') # Note: check to confirm if this is correct prompt template is correct for this model! prompt = "Tell me about AI" prompt_template=f'''USER: {prompt} ASSISTANT:''' print("\n\n*** Generate:") input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda() output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512) print(tokenizer.decode(output[0])) # Inference can also be done using transformers' pipeline print("*** Pipeline:") pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, max_new_tokens=512, temperature=0.7, top_p=0.95, repetition_penalty=1.15 ) print(pipe(prompt_template)[0]['generated_text']) ``` ## Using other UIs: monkey patch Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev. It can be theoretically be added to any Python UI or custom code to enable the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest. <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: zynix , ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost , Nathan LeClaire, Iucharbius , Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex , terasurfer , Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model # Original model card: Lilloukas' Platypus 30B # ๐Ÿฅณ Platypus-30B has arrived! Platypus-30B is an instruction fine-tuned model based on the LLaMA-30B transformer architecture. | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 64.2 | | ARC (25-shot) | 64.6 | | HellaSwag (10-shot) | 84.3 | | TruthfulQA (0-shot) | 45.8 | | Avg. | 64.7 | We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above. ## Model Details * **Trained by**: Cole Hunter & Ariel Lee * **Model type:** **Platypus-30B** is an auto-regressive language model based on the LLaMA transformer architecture. * **Language(s)**: English * **License for base weights**: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md). | Hyperparameter | Value | |---------------------------|-------| | \\(n_\text{parameters}\\) | 33B | | \\(d_\text{model}\\) | 6656 | | \\(n_\text{layers}\\) | 60 | | \\(n_\text{heads}\\) | 52 | ## Training Dataset Dataset of highly filtered and curated question and answer pairs. Release TBD. ## Training Procedure `lilloukas/Platypus-30B` was instruction fine-tuned using LoRA on 4 A100 80GB. For training details and inference instructions please see the [Platypus-30B](https://github.com/arielnlee/Platypus-30B.git) GitHub repo. ## Reproducing Evaluation Results Install LM Evaluation Harness: ``` git clone https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e . ``` Each task was evaluated on a single A100 80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=lilloukas/Platypus-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affect the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly. ## Citations ```bibtex @article{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ```
Yntec/Dreamsphere
Yntec
"2023-07-24T04:52:53Z"
1,399
5
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "Noosphere", "Dreamlike", "Rainbowpatch", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-14T22:55:17Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - stable-diffusion-diffusers - diffusers - text-to-image - Noosphere - Dreamlike - Rainbowpatch --- Preview image by Digiplay: ![](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/IPh2F_yhhQ6j29hvwuGCl.jpeg) # Dreamsphere A mix of Noosphere v3 by skumerz and my favorite models. Now with the Color 101 VAE baked. Original pages: https://civitai.com/models/36538?modelVersionId=107675 https://civitai.com/models/5528/rainbowpatch
Q-bert/Terminis-7B
Q-bert
"2023-12-20T08:53:13Z"
1,398
8
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-12T06:30:48Z"
--- license: apache-2.0 language: - en pipeline_tag: text-generation tags: - merge --- ## Terminis-7B Merge [v1olet/v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B) and [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) using slerp merge. You can use ChatML and Alpaca format. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [Coming soon]() | Metric | Value | |-----------------------|---------------------------| | Avg. | Coming soon | | ARC (25-shot) | Coming soon | | HellaSwag (10-shot) | Coming soon | | MMLU (5-shot) | Coming soon | | TruthfulQA (0-shot) | Coming soon | | Winogrande (5-shot) | Coming soon | | GSM8K (5-shot) | Coming soon |
PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0
PracticeLLM
"2023-12-30T01:45:12Z"
1,398
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "ko", "dataset:kyujinpy/KOR-OpenOrca-Platypus-v3", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-26T18:29:14Z"
--- language: - en - ko datasets: - kyujinpy/KOR-OpenOrca-Platypus-v3 pipeline_tag: text-generation license: cc-by-nc-sa-4.0 --- # **SOLAR-tail-10.7B-instruct-v1.0** ## Model Details **Model Developers** Kyujin Han (kyujinpy) **Method** Instruction-tuning with [PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0](https://huggingface.co/PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0). **Datasets** datasets: [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3). **Hyperparameters** ```python python finetune.py \ --base_model PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 \ --data-path kyujinpy/KOR-OpenOrca-Platypus-v3 \ --output_dir ./SOLAR-tail-10.7B-instruct \ --batch_size 64 \ --micro_batch_size 1 \ --num_epochs 1 \ --learning_rate 3e-5 \ --cutoff_len 4096 \ --val_set_size 0 \ --lora_r 16 \ --lora_alpha 16 \ --lora_dropout 0.05 \ --lora_target_modules '[q_proj, k_proj, v_proj, o_proj, gate_proj, down_proj, up_proj, lm_head]' \ --train_on_inputs False \ --add_eos_token False \ --group_by_length False \ --prompt_template_name user_prompt \ --lr_scheduler 'cosine' \ ``` > Platypus repo. # **Model Benchmark** ## Open leaderboard - Follow up as [link](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard). | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Ko-CommonGenV2 | | --- | --- | --- | --- | --- | --- | --- | | **PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0** | 51.70 | 46.93 | 58.19 | 53.15 | 46.52 | 53.72 | | PracticeLLM/SOLAR-tail-10.7B-Merge-v1.0 | 48.32 | 45.73 | 56.97 | 38.77 | 38.75 | 61.16 | | jjourney1125/M-SOLAR-10.7B-v1.0 | 55.15 | 49.57 | 60.12 | 54.60 | 49.23 | 62.22 | # Implementation Code ```python ### KO-Platypus from transformers import AutoModelForCausalLM, AutoTokenizer import torch repo = "PracticeLLM/SOLAR-tail-10.7B-instruct-v1.0" OpenOrca = AutoModelForCausalLM.from_pretrained( repo, return_dict=True, torch_dtype=torch.float16, device_map='auto' ) OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo) ``` ---
anakin87/Llama-3-8b-ita-ties
anakin87
"2024-05-24T21:31:59Z"
1,398
3
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "it", "arxiv:2306.01708", "base_model:swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA", "base_model:DeepMount00/Llama-3-8b-Ita", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-05-17T18:07:47Z"
--- base_model: - swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA - DeepMount00/Llama-3-8b-Ita - meta-llama/Meta-Llama-3-8B library_name: transformers tags: - mergekit - merge license: llama3 language: - it --- # Llama-3-8b-ita-ties This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). I tried to merge two of the best Italian LLMs using Mergekit. The results are acceptable, but I could not improve on the best existing model. ## Evaluation For a detailed comparison of model performance, check out the [Leaderboard for Italian Language Models](https://huggingface.co/spaces/FinancialSupport/open_ita_llm_leaderboard). Here's a breakdown of the performance metrics: | Metric | hellaswag_it acc_norm | arc_it acc_norm | m_mmlu_it 5-shot acc | Average | |:----------------------------|:----------------------|:----------------|:---------------------|:--------| | **Accuracy Normalized** | 0.6621 | 0.5535 | 0.5749 | 0.5968 | ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) as a base. ### Models Merged The following models were included in the merge: * [swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA](https://huggingface.co/swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA) * [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: meta-llama/Meta-Llama-3-8B # no parameters necessary for base model - model: swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA parameters: density: 0.5 weight: 0.6 - model: DeepMount00/Llama-3-8b-Ita parameters: density: 0.5 weight: 0.3 merge_method: ties base_model: meta-llama/Meta-Llama-3-8B parameters: normalize: true dtype: bfloat16 ```
timm/maxvit_base_tf_512.in1k
timm
"2023-05-10T23:58:36Z"
1,397
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2204.01697", "license:apache-2.0", "region:us" ]
image-classification
"2022-12-02T21:49:47Z"
--- tags: - image-classification - timm library_name: timm license: apache-2.0 datasets: - imagenet-1k --- # Model card for maxvit_base_tf_512.in1k An official MaxViT image classification model. Trained in tensorflow on ImageNet-1k by paper authors. Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman. ### Model Variants in [maxxvit.py](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/maxxvit.py) MaxxViT covers a number of related model architectures that share a common structure including: - CoAtNet - Combining MBConv (depthwise-separable) convolutional blocks in early stages with self-attention transformer blocks in later stages. - MaxViT - Uniform blocks across all stages, each containing a MBConv (depthwise-separable) convolution block followed by two self-attention blocks with different partitioning schemes (window followed by grid). - CoAtNeXt - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in CoAtNet. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT - A timm specific arch that uses ConvNeXt blocks in place of MBConv blocks in MaxViT. All normalization layers are LayerNorm (no BatchNorm). - MaxxViT-V2 - A MaxxViT variation that removes the window block attention leaving only ConvNeXt blocks and grid attention w/ more width to compensate. Aside from the major variants listed above, there are more subtle changes from model to model. Any model name with the string `rw` are `timm` specific configs w/ modelling adjustments made to favour PyTorch eager use. These were created while training initial reproductions of the models so there are variations. All models with the string `tf` are models exactly matching Tensorflow based models by the original paper authors with weights ported to PyTorch. This covers a number of MaxViT models. The official CoAtNet models were never released. ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 119.9 - GMACs: 138.0 - Activations (M): 704.0 - Image size: 512 x 512 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('maxvit_base_tf_512.in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_base_tf_512.in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 64, 256, 256]) # torch.Size([1, 96, 128, 128]) # torch.Size([1, 192, 64, 64]) # torch.Size([1, 384, 32, 32]) # torch.Size([1, 768, 16, 16]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'maxvit_base_tf_512.in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 768, 16, 16) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison ### By Top-1 |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| ### By Throughput (samples / sec) |model |top1 |top5 |samples / sec |Params (M) |GMAC |Act (M)| |------------------------------------------------------------------------------------------------------------------------|----:|----:|--------------:|--------------:|-----:|------:| |[coatnext_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnext_nano_rw_224.sw_in1k) |81.95|95.92| 2525.52| 14.70| 2.47| 12.80| |[coatnet_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_nano_rw_224.sw_in1k) |81.70|95.64| 2344.52| 15.14| 2.41| 15.41| |[coatnet_rmlp_nano_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_nano_rw_224.sw_in1k) |82.05|95.87| 2109.09| 15.15| 2.62| 20.34| |[coatnet_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_0_rw_224.sw_in1k) |82.39|95.84| 1831.21| 27.44| 4.43| 18.73| |[coatnet_bn_0_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_bn_0_rw_224.sw_in1k) |82.39|96.19| 1600.14| 27.44| 4.67| 22.04| |[maxvit_rmlp_pico_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_pico_rw_256.sw_in1k) |80.53|95.21| 1594.71| 7.52| 1.85| 24.86| |[maxxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_nano_rw_256.sw_in1k) |83.03|96.34| 1341.24| 16.78| 4.37| 26.05| |[maxvit_rmlp_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_nano_rw_256.sw_in1k) |82.96|96.26| 1283.24| 15.50| 4.47| 31.92| |[maxxvitv2_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxxvitv2_nano_rw_256.sw_in1k) |83.11|96.33| 1276.88| 23.70| 6.26| 23.05| |[maxvit_nano_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_nano_rw_256.sw_in1k) |82.93|96.23| 1218.17| 15.45| 4.46| 30.28| |[maxvit_tiny_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_tiny_rw_224.sw_in1k) |83.50|96.50| 1100.53| 29.06| 5.11| 33.11| |[coatnet_rmlp_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw_224.sw_in1k) |83.36|96.45| 1093.03| 41.69| 7.85| 35.47| |[coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_1_rw2_224.sw_in12k_ft_in1k) |84.90|96.96| 1025.45| 41.72| 8.11| 40.13| |[maxvit_tiny_tf_224.in1k](https://huggingface.co/timm/maxvit_tiny_tf_224.in1k) |83.41|96.59| 1004.94| 30.92| 5.60| 35.78| |[coatnet_1_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_1_rw_224.sw_in1k) |83.62|96.38| 989.59| 41.72| 8.04| 34.60| |[maxvit_rmlp_tiny_rw_256.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_tiny_rw_256.sw_in1k) |84.23|96.78| 807.21| 29.15| 6.77| 46.92| |[maxvit_rmlp_small_rw_224.sw_in1k](https://huggingface.co/timm/maxvit_rmlp_small_rw_224.sw_in1k) |84.49|96.76| 693.82| 64.90| 10.75| 49.30| |[maxvit_small_tf_224.in1k](https://huggingface.co/timm/maxvit_small_tf_224.in1k) |84.43|96.83| 647.96| 68.93| 11.66| 53.17| |[coatnet_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_2_rw_224.sw_in12k_ft_in1k) |86.57|97.89| 631.88| 73.87| 15.09| 49.22| |[coatnet_rmlp_2_rw_224.sw_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in1k) |84.61|96.74| 625.81| 73.88| 15.18| 54.78| |[coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_224.sw_in12k_ft_in1k) |86.49|97.90| 620.58| 73.88| 15.18| 54.78| |[maxxvit_rmlp_small_rw_256.sw_in1k](https://huggingface.co/timm/maxxvit_rmlp_small_rw_256.sw_in1k) |84.63|97.06| 575.53| 66.01| 14.67| 58.38| |[maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.64|98.02| 501.03| 116.09| 24.20| 62.77| |[maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_224.sw_in12k_ft_in1k) |86.89|98.02| 375.86| 116.14| 23.15| 92.64| |[maxvit_base_tf_224.in1k](https://huggingface.co/timm/maxvit_base_tf_224.in1k) |84.85|96.99| 358.25| 119.47| 24.04| 95.01| |[maxvit_tiny_tf_384.in1k](https://huggingface.co/timm/maxvit_tiny_tf_384.in1k) |85.11|97.38| 293.46| 30.98| 17.53| 123.42| |[maxvit_large_tf_224.in1k](https://huggingface.co/timm/maxvit_large_tf_224.in1k) |84.93|96.97| 247.71| 211.79| 43.68| 127.35| |[maxvit_small_tf_384.in1k](https://huggingface.co/timm/maxvit_small_tf_384.in1k) |85.54|97.46| 188.35| 69.02| 35.87| 183.65| |[coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/coatnet_rmlp_2_rw_384.sw_in12k_ft_in1k) |87.39|98.31| 160.80| 73.88| 47.69| 209.43| |[maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxxvitv2_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.47|98.37| 149.49| 116.09| 72.98| 213.74| |[maxvit_tiny_tf_512.in1k](https://huggingface.co/timm/maxvit_tiny_tf_512.in1k) |85.67|97.58| 144.25| 31.05| 33.49| 257.59| |[maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k](https://huggingface.co/timm/maxvit_rmlp_base_rw_384.sw_in12k_ft_in1k) |87.81|98.37| 106.55| 116.14| 70.97| 318.95| |[maxvit_base_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_384.in21k_ft_in1k) |87.92|98.54| 104.71| 119.65| 73.80| 332.90| |[maxvit_base_tf_384.in1k](https://huggingface.co/timm/maxvit_base_tf_384.in1k) |86.29|97.80| 101.09| 119.65| 73.80| 332.90| |[maxvit_small_tf_512.in1k](https://huggingface.co/timm/maxvit_small_tf_512.in1k) |86.10|97.76| 88.63| 69.13| 67.26| 383.77| |[maxvit_large_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_384.in21k_ft_in1k) |87.98|98.56| 71.75| 212.03|132.55| 445.84| |[maxvit_large_tf_384.in1k](https://huggingface.co/timm/maxvit_large_tf_384.in1k) |86.23|97.69| 70.56| 212.03|132.55| 445.84| |[maxvit_base_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_base_tf_512.in21k_ft_in1k) |88.20|98.53| 50.87| 119.88|138.02| 703.99| |[maxvit_base_tf_512.in1k](https://huggingface.co/timm/maxvit_base_tf_512.in1k) |86.60|97.92| 50.75| 119.88|138.02| 703.99| |[maxvit_xlarge_tf_384.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_384.in21k_ft_in1k) |88.32|98.54| 42.53| 475.32|292.78| 668.76| |[maxvit_large_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_large_tf_512.in21k_ft_in1k) |88.04|98.40| 36.42| 212.33|244.75| 942.15| |[maxvit_large_tf_512.in1k](https://huggingface.co/timm/maxvit_large_tf_512.in1k) |86.52|97.88| 36.04| 212.33|244.75| 942.15| |[maxvit_xlarge_tf_512.in21k_ft_in1k](https://huggingface.co/timm/maxvit_xlarge_tf_512.in21k_ft_in1k) |88.53|98.64| 21.76| 475.77|534.14|1413.22| ## Citation ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ``` ```bibtex @article{tu2022maxvit, title={MaxViT: Multi-Axis Vision Transformer}, author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao}, journal={ECCV}, year={2022}, } ``` ```bibtex @article{dai2021coatnet, title={CoAtNet: Marrying Convolution and Attention for All Data Sizes}, author={Dai, Zihang and Liu, Hanxiao and Le, Quoc V and Tan, Mingxing}, journal={arXiv preprint arXiv:2106.04803}, year={2021} } ```
Lasorco/lametta
Lasorco
"2023-11-08T07:37:12Z"
1,397
106
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "safetensors", "ja", "license:creativeml-openrail-m", "region:us" ]
text-to-image
"2023-03-28T14:29:55Z"
--- license: creativeml-openrail-m tags: - stable-diffusion - text-to-image - diffusers - safetensors language: - ja --- # ใ“ใฎใƒขใƒ‡ใƒซใฏไฝ•๏ผŸ - ๅ€‹ไบบ็š„ใชๆ™ฎๆฎต้ฃใ„ใฎใŸใ‚ใซใƒžใƒผใ‚ธใ—ใŸใƒขใƒ‡ใƒซใงใ™ใ€็™–ใŒๅผทใ„ใจๆ€ใ„ใพใ™ใ€‚ - ้ ญ่บซไฝŽใ‚ใฎๅฅณใฎๅญใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ใ‚ˆใ†ใซ่ชฟๆ•ดใ—ใฆใ„ใพใ™ใฎใงไป–ใฎใƒขใƒ‡ใƒซใ‚ˆใ‚Šๅนด้ฝขๆ“ไฝœใฎๅŠ ๆธ›ใŒๅฟ…่ฆใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚ใพใŸๅฅณใฎๅญไปฅๅค–ใฎๅ‡บๅŠ›ใซใฏๆœŸๅพ…ใ—ใชใ„ใงใใ ใ•ใ„ใ€‚ - ๏ผˆๅ€‹ไบบ็š„ใ“ใ ใ‚ใ‚Šใง๏ผ‰ใงใใ‚‹ใ ใ‘็›ฎใฎใƒใ‚คใƒฉใ‚คใƒˆใ‚’ๅคฑใ‚ใชใ„ใ‚ˆใ†ใซใƒžใƒผใ‚ธใ—ใฆใ‚ใ‚Šใพใ™ใ€‚ๆŒ‡ใฎๆๅ†™ใซใ‚‚ๆฐ—ใ‚’ไฝฟใฃใŸใคใ‚‚ใ‚Šใงใ™ใŒใƒ—ใƒญใƒณใƒ—ใƒˆๆฌก็ฌฌใง็ฐกๅ˜ใซ็ ด็ถปใ—ใพใ™ใ€‚ - VAEใฏๅค–้ƒจใฎใ‚‚ใฎใ‚’ไฝฟ็”จใ™ใ‚‹ใชใ‚Šใ”่‡ช่บซใง็„ผใ่พผใ‚“ใงใใ ใ•ใ„ใ€‚ใ‚ตใƒณใƒ—ใƒซใฏๅŸบๆœฌAnythingใฎVAEใ‚’ไฝฟ็”จใ—ใฆใ„ใพใ™ใ€‚ๅ€‹ไบบ็š„ใซๆ™ฎๆฎตใฏclearVAEใ‚ทใƒชใƒผใ‚บใ‚’ไฝฟใฃใฆใ„ใพใ™ใ€‚ - ๆ—ขๅญ˜ใฎLoRAใจใฎใƒžใƒƒใƒใƒณใ‚ฐใซใคใ„ใฆใฏๅ€‹ไบบ็š„ใซLoRAใ‚’ใ‚ใพใ‚Šไฝฟใ‚ใชใ„ใŸใ‚ๆœชๆคœ่จผใงใ™ใ€‚ไธŠๆ‰‹ใๅๆ˜ ใ•ใ‚Œใชใ„ใ“ใจใฎใปใ†ใŒๅคšใ„ใงใ—ใ‚‡ใ†ใ€‚ - Samplerใฏไฝ•ใงใ‚‚ๅคงไธˆๅคซใ ใจๆ€ใ„ใพใ™ใŒใ€ใ™ในใฆDPM++ 2M Karrasใง่ชฟๆ•ดใ—ใพใ—ใŸใฎใงๅ›ฐใฃใŸใ‚‰ใใ‚Œใงใ€‚ - Hires.fixใงไธ€้ƒจใฎ่‰ฒๅ‘ณใŒๅŒ–ใ‘ใ‚‹ๅ ดๅˆใฏHires stepsใ‚’0๏ผˆ่‡ชๅ‹•๏ผ‰ใ‹10ไปฅไธŠใฎๆ•ฐๅ€คใ‚’ๅ–ใฃใฆใฟใฆใใ ใ•ใ„ใ€‚๏ผˆlamettaใซ้™ใฃใŸใ“ใจใงใฏใชใ„ใจๆ€ใ„ใพใ™ใŒ๏ผ‰ - ๆŽจๅฅจ๏ผŸใƒ—ใƒญใƒณใƒ—ใƒˆ<br> ใƒ—ใƒญใƒณใƒ—ใƒˆใฏ็Ÿญใ‚ใชๆ–นใŒ็ตๆžœใŒ่‰ฏใ„ใงใ™ใ€‚ใพใšใฏ็Ÿญใ‚ใซๆŒ‡ๅฎšใ—ใฆๅฟ…่ฆใชใ‚‚ใฎใ‚’ไป˜ใ‘่ถณใ—ใฆ่ชฟๆ•ดใ™ใ‚‹ใ‚ˆใ†ใชไฝฟใ„ๆ–นใŒ่‰ฏใ„ใงใ—ใ‚‡ใ†ใ€‚<br> ใ‚ฏใ‚ชใƒชใƒ†ใ‚ฃใ‚ฟใ‚ฐใฏ้›‘ใซๅ‡บๅŠ›ใ—ใฆๆฅฝใ—ใ‚€ๅˆ†ใซใฏๅฟ…ใšใ—ใ‚‚ๅฟ…้ ˆใงใฏใชใ„ใ‚ˆใ†ใซๆ„Ÿใ˜ใพใ™ใ€‚Hires.fixใ™ใ‚‹ใชใ‚‰ใชใใฆใ‚‚่‰ฏใ„ใ‹ใ‚‚๏ผŸ<br> "chibi"ใงใกใณใ‚ญใƒฃใƒฉใŒๅ‡บใฆใฏๆฅใ‚‹ใจๆ€ใ„ใพใ™ใŒไธŠๆ‰‹ใใƒ‡ใƒ•ใ‚ฉใƒซใƒกใ—ใใ‚Œใฆใ„ใชใ„ๆฐ—ใŒใ—ใพใ™ใ€‚<br> LoRAใฏใ‚ญใƒฃใƒฉใ‚‚ใฎใŒ่‹ฆๆ‰‹ใจใฎๅฃฐใ‚’ใŠ่žใใ—ใพใ—ใŸใ€‚ไป–ใฎใƒขใƒ‡ใƒซใจๆฏ”่ผƒใ—ใฆใ‹ใชใ‚Šใƒ‡ใƒ•ใ‚ฉใƒซใƒกใฎๅผทใ„้ก”็ซ‹ใกใงใ™ใ‹ใ‚‰ใŸใ—ใ‹ใซใชใ‚ใจๆ€ใ„ใพใ™ใ€‚<br> LoRA Block Weightใ‚’ๆดป็”จใ—ใฆIN01-02,OUT07-11ใ‚’ใ‚ซใƒƒใƒˆใ™ใ‚‹ใจๅคšๅฐ‘็ทฉๅ’Œใ™ใ‚‹ใ‹ใ‚‚๏ผŸ<br> - ๆŽจๅฅจใƒใ‚ฌใƒ†ใ‚ฃใƒ–ใƒ—ใƒญใƒณใƒ—ใƒˆ<br> "(low quality, worst quality:1.4)"ใ€€ใฏๆŽจๅฅจใงใ™ใŒใƒใ‚ฌใƒ†ใ‚ฃใƒ–TIใชใฉใง็ฝฎใๆ›ใˆใฆใ€ใ‚‚ใ—ใใฏ้‡ใญใฆไฝฟ็”จใ™ใ‚‹ใฎใ‚‚่‰ฏใ„ใจๆ€ใ„ใพใ™ใ€‚<br> TIใฎใŠใ™ใ™ใ‚ใฏ "verybadimagenegative_v1.3"ใ‚„"bad_pictures3"ใจใ‹ใ‚’ๅฎŸ้š›ไฝฟใฃใฆใฟใŸใ‚Šใ—ใฆใ„ใพใ™ใŒใ€ไธ–ใซใ‚ใ‚‹ใ‚‚ใฎใ‚’ๅ…จใฆ็ถฒ็พ…ใงใใฆใ„ใพใ›ใ‚“ใฎใงใ‚‚ใฃใจใ‚ชใ‚นใ‚นใƒกใŒใ‚ใฃใŸใ‚‰ๆ•™ใˆใฆใใ ใ•ใ„ใ€‚<br> ใ‚ขใƒฌใ‚ณใƒฌๆ›ธใ„ใฆใพใ™ใŒ่‡ช็”ฑใซไฝฟใฃใฆใฟใฆ่‰ฏใ„็ตๆžœใŒๅพ—ใ‚‰ใ‚ŒใŸใ‚‰ใ“ใฃใใ‚Šๆ•™ใˆใฆใใ ใ•ใ„ใ€‚<br> - ใชใ‚“ใงใ“ใ‚“ใชใซใ„ใฃใฑใ„ใ‚ใ‚‹ใฎ๏ผŸใฉใ‚Œไฝฟใˆใฐใ„ใ„ใฎ๏ผŸ<br> ใใ‚Œใžใ‚Œใฎ้•ใ„ใฏไพ‹ใˆใ‚‹ใชใ‚‰้ฃฒใฟ็‰ฉใฎใƒ•ใƒฌใƒผใƒใƒผใฎ้•ใ„ใฎใ‚ˆใ†ใชใ‚‚ใฎใชใฎใงใ€ๆ–ฐใ—ใ„้ขจๅ‘ณใŒๅฟ…ใšใ—ใ‚‚ใ‚ใชใŸใฎๅฅฝใฟใซๅˆใ†ใจใฏ้™ใ‚Šใพใ›ใ‚“ใ€‚<br> ๆ–ฐใ—ใ„ใ‚‚ใฎใ‚’็พŽๅ‘ณใ—ใ„ใจๆ„Ÿใ˜ใ‚‹ใ“ใจใ‚‚ใ‚ใ‚Œใฐไปฅๅ‰ใฎใปใ†ใŒใ—ใฃใใ‚Šๆฅใ‚‹ใ“ใจใ‚‚ใ‚ใ‚‹ใงใ—ใ‚‡ใ†ใ—ใ€ใ‚ฑใƒผใ‚นใƒใ‚คใ‚ฑใƒผใ‚นใงไฝฟใ„ๅˆ†ใ‘ใฆใฟใ‚‹ใฎใ‚‚้ข็™ฝใ„ใงใ—ใ‚‡ใ†ใ€‚<br> ่ฟทใฃใŸใ‚‰ๆœ€ๆ–ฐใฎv2012ใ‚’่ฉฆใ—ใฆใฟใฆใญใ€‚<br> - ไปฅๅ‰ใ‚ขใƒƒใƒ—ใ•ใ‚Œใฆใ„ใŸใƒขใƒ‡ใƒซใฏ [lametta_old](https://huggingface.co/Lasorco/lametta_old) ใซ็งปใ—ใฆใ‚ใ‚Šใพใ™ใฎใงใใกใ‚‰ใ‹ใ‚‰ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใฆใใ ใ•ใ„ใ€‚<br> --- # ๅ‡บๅŠ›ไพ‹ ใ‚ตใƒณใƒ—ใƒซใฏๅฐ‘ใ€…ใ‚ฌใƒใƒฃใ‚’ๅ›žใ—ใฆใ ใ„ใŸใ„ไฝœ็”ปๆ„ๅ›ณใซใชใฃใŸใ‚‚ใฎใ‚’ใ‚ใ’ใฆใ„ใพใ™<br> ็ดฐ้ƒจใฎใŠใ‹ใ—ใช็‚นใ‚‚ใ“ใฎใƒขใƒ‡ใƒซใฎ็‰นๆ€งใงใ™ใฎใงใใฎใพใพใฎๆŽฒ่ผ‰ใงใ™<br> ![01924-1419831433.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/U7urJlgwel2kokMV0O3fe.png) **v2012**ใ€€๏ผš v17็ณปใฎๆ”น่‰ฏใƒใƒผใ‚ธใƒงใƒณ <details><summary><b>20xx็ณป่ฉณ็ดฐ</b></summary> ## v2012 v17็ณปใฎๆ”น่‰ฏใ‚’็›ฎๆŒ‡ใ—ใฆใƒžใƒผใ‚ธใ—ใพใ—ใŸใŒใ€v17ใจv19ใ‚’็ตฑๅˆใ—ใŸใƒขใƒ‡ใƒซใจ่จ€ใฃใŸ็ซ‹ใกไฝ็ฝฎใซใชใ‚Šใพใ—ใŸใ€‚๏ผˆv19ใ‚‚v17ใ‚‚ใปใผใŠใ‚“ใชใ˜ใ˜ใ‚ƒใ‚“๏ผใƒใ‚คใใฎ้€šใ‚Šใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“โ€ฆ๏ผ‰<br> ใ„ใคใงใ‚‚ใ ใ„ใŸใ„ไธธใ„็›ฎใฎๅ‡บๅŠ›ใฎใƒขใƒ‡ใƒซใฎใใ‚Œใ‚’่ธใพใˆใคใคๅ‰ใ‚ˆใ‚Šใ‚‚ๅคšๅฐ‘่กจๆƒ…ใŒๅค‰ใ‚ใ‚‹ใ‚ˆใ†ใซใชใฃใŸๆ„Ÿใ˜๏ผˆใ‚’็›ฎๆŒ‡ใ—ใŸใ‚“ใ ใ‘ใฉใใ†ใชใฃใฆใ‚‹ใ‚ˆใญ๏ผŸ๏ผ‰ใงใ™ใ€‚<br> ใจใฏใ„ใˆlamettaใชใฎใงใ ใ„ใŸใ„ใ„ใคใ‚‚้€šใ‚Šใฎ้›ฐๅ›ฒๆฐ—ใฏ็ถ™ๆ‰ฟใ—ใฆใ„ใ‚‹ใจๆ€ใ„ใพใ™ใ€‚<br> ๅ†…ๅŒ…VAEใฏClearVAE Variantใงใ™ใŒใŠๅฅฝใฟใฎVAEใ‚’่จญๅฎšใ—ใฆไฝฟ็”จใ—ใฆใ„ใŸใ ใ„ใฆๅ•้กŒๆœ‰ใ‚Šใพใ›ใ‚“ใ€‚<br> ใƒžใƒผใ‚ธใƒฌใ‚ทใƒ”ใฏ<br> v1745 x v1922 = A<br> Simple ink-prt x A = B<br> CookieCutter Flex v3.5 x A = C<br> B x C = D<br> A x D(tensor merge) = F<br> A x F(cosine) = G <br> v1930 x F = H<br> spekulatius_v1 x v412(modified) = I<br> H x I = J<br> Rabbit_v6 x J = K<br> G x K = v2012<br> <br> ๆ”นใ‚ใฆใƒžใƒผใ‚ธๅฑฅๆญด่ฟฝใฃใฆใฟใŸใ‚‰้šๅˆ†ใ”ใกใ‚ƒใ”ใกใ‚ƒๆททใœใฆใพใ™ใญโ€ฆ<br> lamettaใฎ้ชจๆ ผใซspekulatiusใฎ็ดฐใ‹ใ„่กจ็พใจCookieCutterใฎใ‚ชใƒ–ใ‚ธใ‚งใ‚ฏใƒˆใฎๅคšใ•ใ‚’่ถณใ—ใฆSimple ink-prtใจabbit_v6ใงใ†ใ‚‹ใ•ใใชใ‚Šใ™ใŽใชใ„ใ‚ˆใ†ใซใ—ใŸใ€‚ใจใฏๅพŒไป˜ใ‘ใช่งฃ่ชฌใงใ™ใ‘ใฉใพใ‚ๅคšๅˆ†ใใ‚“ใชๆ„Ÿใ˜ใงใ™ใ€‚<br> ![01907-729192073.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/SP84uFxpwP-6eDlsY2Pa9.png) ``` 1girl,loli,thick eyebrows,black short hair,v-shaped eyebrows,overall,shirt,straw hat,open mouth,waving,looking at viewer,wheat field,cowboy shot, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 729192073, Size: 512x768, Model hash: 8e5e393bdd, Model: lametta_v2012_fp16, Denoising strength: 0.4, Clip skip: 2, Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, Version: v1.6.0 ``` ![01917-1329736539.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/y9_krwiOt4LhML-c2U5bf.png) ``` 1girl,loli,large breasts,smile,short hair,(curly hair:1.1),blue maid costume,lace trim blue thighhighs,maid headdress,lace trim elbow gloves,looking at viewer, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1329736539, Size: 512x768, Model hash: 8e5e393bdd, Model: lametta_v2012_fp16, Denoising strength: 0.4, Clip skip: 2, Hires upscale: 2, Hires upscaler: 4x_BooruGan_650k, Version: v1.6.0 ``` ![01918-4280876389.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/B6shRDoIin82_RXsMOJTv.png) ``` watercolor,pastelcolor,colorful,fairy,fairy wings,flowers,plants,mushroom,light particles, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4280876389, Size: 768x512, Model hash: 8e5e393bdd, Model: lametta_v2012_fp16, Denoising strength: 0.6, Clip skip: 2, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Version: v1.6.0 ``` ใชใ‚“ใ‹ไปŠๅ›žใ‚ตใƒณใƒ—ใƒซใŒClipskip:2ใงใฎๆŽฒ่ผ‰ใงใ™ใ‘ใฉ1ใงใ‚‚ใ‚‚ใกใ‚ใ‚“ๆฅฝใ—ใ‚ใพใ™ใ€‚ </details> <br> --- ![v19xx.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/fNinpu3KP3pl5xt1wpzKx.png) **v1921** ,v1922 ,**v1930**ใ€€๏ผš ใ‚ขใƒ‹ใƒกๅก—ใ‚Šใฃใฝใ„ๅ‡บๅŠ›ใฎใƒขใƒ‡ใƒซ <details><summary><b>19xx็ณป่ฉณ็ดฐ</b></summary> ## v1930 v1921ใ‚’ใƒ™ใƒผใ‚นใซใ—ใฆv1745ใ‚’ใƒžใƒผใ‚ธใ—ใพใ—ใŸใ€‚v1604ใจใƒœใƒ„ใซใ—ใฆ่กจใซๅ‡บใ—ใฆใ„ใชใ„v1810ใ‚‚้š ใ—ๅ‘ณใซๆททใœ่พผใ‚“ใงใ‚ใ‚Šใพใ™ใ€‚<br> ๅ†…ๅŒ…ใ—ใฆใ„ใ‚‹VAEใฏๆ˜”ใƒžใƒผใ‚ธใ—ใฆๅฟ˜ใ‚ŒๅŽปใฃใฆใ„ใŸVAEใงใ™ใ€‚<br> VAEๅ†…ๅŒ…ใฏ็”Ÿๆˆๅˆๅฟƒ่€…ใ•ใ‚“ๅ‘ใ‘ใธใฎๅฏพๅฟœใงใ™ใ€‚ใ“ใ‚ŒใŒๆœ€่‰ฏใจใ„ใ†ใ‚ใ‘ใงใฏใชใ„ใฎใงใŠๅฅฝใฟใฎVAEใ‚’่จญๅฎšใ—ใฆไฝฟใฃใฆใใ ใ•ใ„ใ€‚<br> ![02116-2003955719.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/Ltbhgtcfbk7VcEVtg-Pj9.png) ``` 1girl,loli,hands on own cheek,happy,open mouth,spoken heart,parfait,cafe, Negative prompt: (worst quality, low quality:1.4), Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2003955719, Size: 512x768, Model hash: 95bc5b7f2b, Model: lametta_v1930_fp16, Denoising strength: 0.4, Hires upscale: 2, Hires upscaler: 4x_Valar_v1, Version: v1.6.0 ``` ![02107-2160317488.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/_XHrNQutnnJM0QDKLDoAZ.png) ``` 1girl,huge breasts,:d,(animal kigurumi pajamas:1.2),bedroom, Negative prompt: (worst quality,low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2160317488, Size: 512x768, Model hash: 95bc5b7f2b, Model: lametta_v1930_fp16, Denoising strength: 0.4, Hires upscale: 2, Hires upscaler: 4x-UltraSharp, Version: v1.6.0 ``` ![02157-1020516930.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/n5MnxbuipMjKXdEH8cY3D.png) ``` 1girl,open coat,loli,autumn maple forest,light smile, Negative prompt: verybadimagenegative_v1.3, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1020516930, Size: 768x512, Model hash: 95bc5b7f2b, Model: lametta_v1930_fp16, Denoising strength: 0.7, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.3, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.4, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer version: 23.9.3, Hires upscale: 2, Hires steps: 40, Hires upscaler: Latent (nearest-exact), TI hashes: "verybadimagenegative_v1.3: d70463f87042",Version: v1.6.0 ``` ![00003-3965509510.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/COgdgJtOMVrdevBaiqn-h.png) sketch้ขจใซ้Šในใ‚‹ใƒขใƒ‡ใƒซใจใ„ใ†่ฆๆœ›ใ‚’ใ‚‚ใ‚‰ใฃใŸใฎใงๅฏพๅฟœใ—ใฆใฟใŸใคใ‚‚ใ‚Šใงใ™ใŒใฉใ†ใชใ‚“ใงใ—ใ‚‡ใ†๏ผŸใ‚ˆใใ‚ใ‹ใ‚‰ใชใ„<br> --- ## v1922 v1921ใฎใƒชใƒŸใƒƒใ‚ฏใ‚น็‰ˆใงใ™<br> ใ‚‚ใจใ‚ˆใ‚Šๅ†ใƒžใƒผใ‚ธใ—ใ‚ˆใ†ใจใฏๆ€ใฃใฆใ„ใพใ—ใŸใŒใƒžใƒผใ‚ธๅฑฅๆญดcsvใ‚’ใƒญใ‚นใƒˆใ—ใŸใŸใ‚ๅ…จ้šŽๅฑคๅ†ๆง‹็ฏ‰ใจใชใฃใฆใ„ใพใ™ใ€‚<br> base้ƒจใ‚‚้…ๅˆ†ๅค‰ๆ›ดใ•ใ‚ŒใŸใŸใ‚v1921ใจใฏๅ‡บๅŠ›ใŒ็ตๆง‹ๅค‰ใ‚ใฃใŸใจๆ€ใ„ใพใ™ใŒใฉใ†ใงใ—ใ‚‡ใ†๏ผŸ<br> ใ„ใคใ‚‚้€šใ‚Š1921ใ€1922ใจใ‚‚ใซๅฅฝใฟใฎๆ–นใ‚’ไฝฟใฃใฆใ‚‚ใ‚‰ใˆใŸใ‚‰ใจๆ€ใ„ใพใ™ใ€‚<br> ![00575-842203328.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/p_OVKsjoOal0I1KAvsrkq.png) ``` 1girl,loli,school uniform,autumn leaves,cowboy shot, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 842203328, Size: 512x768, Model hash: 945c2bdaad, Model: lametta_v1922_fp16, Denoising strength: 0.4, Hires upscale: 2, Hires upscaler: R-ESRGAN 4x+ Anime6B, Version: v1.6.0 ``` ![00583-4178983340.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/diw8v1eadQI05DT6V4v9I.png) ``` 1girl,loli,large breasts,angel wings,angel,halo,night,city lights,flying, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4178983340, Size: 512x768, Model hash: 945c2bdaad, Model: lametta_v1922_fp16, Denoising strength: 0.4, Hires upscale: 2, Hires upscaler: 4x_Valar_v1, Version: v1.6.0 ``` ![00571-2476768054.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/so40V0L5DoHSslvNyYW8q.png) ``` 2girls,looking at viewer,outdoors,forest,dappled sunlight,hug, ADDCOMM loli,mint Fishtail braid,mint dress,puffy short sleeves,hair flower,hairband,pointy ears,smile, ADDCOL loli,brown hair,(dark skin:1.2),open mouth,loincloth,navel,Tropical costume, Negative prompt: verybadimagenegative_v1.3, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2476768054, Size: 768x512, Model hash: 945c2bdaad, Model: lametta_v1922_fp16, Denoising strength: 0.4, RP Active: True, RP Divide mode: Matrix, RP Matrix submode: Horizontal, RP Mask submode: Mask, RP Prompt submode: Prompt, RP Calc Mode: Attention, RP Ratios: "1,1", RP Base Ratios: 0.2, RP Use Base: False, RP Use Common: True, RP Use Ncommon: False, RP Change AND: False, RP LoRA Neg Te Ratios: 0, RP LoRA Neg U Ratios: 0, RP threshold: 0.4, RP LoRA Stop Step: 0, RP LoRA Hires Stop Step: 0, RP Flip: False, Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, TI hashes: "verybadimagenegative_v1.3: d70463f87042", Version: v1.6.0 ``` โ€ปใ„ใคใ‚‚ๅ‡บๅŠ›ใƒ†ใ‚นใƒˆใซไป˜ใๅˆใฃใฆใ‚‚ใ‚‰ใฃใฆใ„ใ‚‹ใ‚ญใƒฃใƒฉใ‚ฏใ‚ฟใƒผใงใ™ --- ## v1921 ไปฅๅ‰ใ‹ใ‚‰ใฎไฝ•ใจ่จ€ใ†ใ‹2.25ๆฌกๅ…ƒ๏ผŸใฃใฝใ„ใ‚ˆใ†ใชๅก—ใ‚Šใงใฏใชใใ€ใ‚‚ใ†ใกใ‚‡ใ„ใ‚ขใƒ‹ใƒกๅก—ใ‚Šใฃใฝใ„ใฎใŒใปใ—ใ„ใชใ‚ใจๅ‰ใ€…ใ‹ใ‚‰ๆ€ใฃใฆใ„ใพใ—ใŸใ€‚<br> ใ‚ใ‚‹ๆ™‚ใƒ•ใƒฉใƒƒใƒˆใงใ‚ขใƒ‹ใƒกใชใƒขใƒ‡ใƒซใ‚’ใƒžใƒผใ‚ธใ•ใ‚Œใฆใ„ใ‚‹ๆ–นใ‹ใ‚‰ใ”ๅŽšๆ„ใงใใฎใƒขใƒ‡ใƒซใ‚’ๆไพ›ใใ ใ•ใ‚Š๏ผˆๆœฌๅฝ“ใซๆ„Ÿ่ฌ๏ผ‰ใ€ใใฎๅก—ใ‚Šใ‚’ๅ…ƒใซใ—ใฆใ‚ขใƒฌใ‚ณใƒฌใ—ใŸใฎใŒไปŠๅ›žใงใ™ใ€‚<br> ๆฌฒๅผตใฃใฆใ„ใŸใ‚‰่ชฟๆ•ดใŒ้›ฃ่ˆชใ—ใฆใ—ใพใ„ใพใ ็…ฎ่ฉฐใ‚่ถณใ‚‰ใšใชๆฐ—ใ‚‚ใ—ใฆใ„ใพใ™ใฎใงใŠใŠใ‚‰ใ‹ใซๆฅฝใ—ใ‚“ใงใ„ใŸใ ใ‘ใŸใ‚‰ใจๆ€ใ„ใพใ™ใ€‚๏ผˆใ‚ดใƒกใƒณใญ๏ผ๏ผ‰<br> ็ด ใฎๅ‡บๅŠ›ใงใฏไปฅๅ‰ใจๅค‰ๅŒ–ใŒไนใ—ใ„ๆ„Ÿใ‚‚ใ‚ใ‚Šใพใ™ใฎใงใ‚ขใƒ‹ใƒก็ณปใฎใ‚ขใƒƒใƒ—ใ‚นใ‚ฑใƒผใƒฉใƒผใงHires.fixใ—ใฆไฝฟใฃใฆใใ ใ•ใ„ใ€‚ใ‚ตใƒณใƒ—ใƒซใ‚‚Hiresใ—ใฆใฎใ‚‚ใฎใซใชใ‚Šใพใ™ใ€‚<br> ใพใŸไปŠๅ›žใฏVAE๏ผˆClearVAE Variant๏ผ‰ใ‚’ๅ†…ๅŒ…ใ•ใ›ใฆใฟใพใ—ใŸใ€‚ใ‚‚ใกใ‚ใ‚“ใŠๅฅฝใฟใฎVAEใ‚’่จญๅฎšใ—ใฆไฝฟ็”จใ—ใฆใ„ใŸใ ใ„ใฆๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€‚<br> ไปŠๅ›žไฝฟ็”จใ—ใŸใƒขใƒ‡ใƒซใฏ - S-flat-nullpo-testBBB4 @nullpox - NuipeniMix ver.2 @McSionnaigh - WateryAbyss @The_Missing_Models - lametta_v1745,v1605,1604 S-flat-nullpo-testBBB4ใ‹ใ‚‰ๅก—ใ‚Šใ‚’ไธญๅฟƒใซไธปใซOUTๅฑคใ‚’ใ€NuipeniMix ver.2ใ‹ใ‚‰ใฏTextEncoderใ‚’ใกใ‚‡ใฃใจใคใพใพใ›ใฆใ‚‚ใ‚‰ใ„ใ€WateryAbyssใ‹ใ‚‰TextEncoderใจOUT7-11ไป˜่ฟ‘ใ‚’้š ใ—ๅ‘ณ็จ‹ๅบฆใซใ‚‚ใ‚‰ใฃใฆใใพใ—ใŸใ€‚<br> ็‰นใซS-flat-nullpo-testBBB4ใฏ้ŽๅŽปใฎlamettaใจใ‹ใ‘ๅˆใ‚ใ›ใŸใ‚‚ใฎใ‚’ๅคš้‡ใƒžใƒผใ‚ธใ—ใฆใ‚ใ‚‹ใฎใงใ“ใฎใƒขใƒ‡ใƒซใŒไปŠๅ›žใฎใ‚ญใƒผใซใชใ‚Šใพใ™ใ€‚<br> ![02196-390773643.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/kmJgMsIH0xIYWxfc29shH.png) ``` 1girl,large breasts,short hair,small breasts,sailor dress,sailor hat,happy,smile,open mouth,skin fang,dappled sunlight, Negative prompt: verybadimagenegative_v1.3,covered navel, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 390773643, Size: 512x768, Model hash: 20aa249203, Model: lametta_v1921_fp16, Denoising strength: 0.4, Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, Version: v1.6.0 ``` โ€ปๅพŒใง่ฆ‹ใŸใ‚‰ใŠ่ƒธใฎๅคงใใ•LargeใจSmallใง2ๅ›žๅ”ฑใˆใฆใฆใƒ€ใƒกใ ใฃใŸ ![02218-2410852180.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/oaNLvS1K3LmBtuFS7Uw98.png) ``` watercolor,pastelcolor,colorful,fairy,fairy wings,flowers,plants,mushroom,light particles, Negative prompt: (worst quality:1.4),(low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2410852180, Size: 512x768, Model hash: 20aa249203, Model: lametta_v1921_fp16, Denoising strength: 0.6, ADetailer model: face_yolov8n.pt, ADetailer confidence: 0.4, ADetailer dilate/erode: 4, ADetailer mask blur: 4, ADetailer denoising strength: 0.5, ADetailer inpaint only masked: True, ADetailer inpaint padding: 32, ADetailer use separate steps: True, ADetailer steps: 46, ADetailer model 2nd: hand_yolov8n.pt, ADetailer confidence 2nd: 0.5, ADetailer dilate/erode 2nd: 4, ADetailer mask blur 2nd: 4, ADetailer denoising strength 2nd: 0.6, ADetailer inpaint only masked 2nd: True, ADetailer inpaint padding 2nd: 32, ADetailer version: 23.9.1, Hires upscale: 2, Hires upscaler: Latent (nearest-exact), Version: v1.6.0 ``` ![02199-2269500953.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/eF0i9x799AMqaWXEO_O0E.png) ``` 1girl,loli,rabbit girl,rabbit ears,all fours,happy,open mouth,outdoors,floral background,pink flower field,looking at viewer, Negative prompt: (verybadimagenegative_v1.3:0.8), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2269500953, Size: 768x512, Model hash: 20aa249203, Model: lametta_v1921_fp16, Denoising strength: 0.4, Hires upscale: 2, Hires upscaler: 4x-UltraSharp, TI hashes: "verybadimagenegative_v1.3: d70463f87042", Version: v1.6.0 ``` </details> <br> --- ![v17xx.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/cpVdPSIkAzTnb0_OvesZv.png) **v1745** ,**v1721** , v1720 ๏ผš v13ใ€v15็ณปๅˆ—ใฎๆ”น่‰ฏๅž‹ใ‚’็›ฎๆŒ‡ใ—ใŸใƒขใƒ‡ใƒซ <details><summary><b>17xx็ณป่ฉณ็ดฐ</b></summary> ## v1745 ๅค‰ๅŒ–ใŒใปใ—ใใฆๅคใ„lamettaใจToraFurryMix v2.0ใŒ้š ใ—ๅ‘ณใจใ—ใฆๆททใœใฆใ‚ใ‚Šใพใ™ใ€‚<br> ไฝ•ใŒๅค‰ใ‚ใฃใŸใฎ๏ผŸใจ่จ€ใ‚ใ‚Œใ‚‹ใจ็ญ”ใˆใซๅ›ฐใ‚‹ใจใ“ใ‚ใงใฏใ‚ใ‚Šใพใ™ใŒใ€Hires.fixๆ™‚ใฎๆŒ‡ใฎ็ ด็ถปใฏๅฐ‘ใชใ‚ใ‹ใ‚‚ใ—ใ‚Œใพใ›ใ‚“ใ€‚<br> ใƒขใƒ‡ใƒซใฎ่ชฟๆ•ดใฏไฝ•ใ‹ใ‚’ๅพ—ๆ„ใซใ™ใ‚‹ใจไฝ•ใ‹ใŒไธๅพ—ๆ‰‹ใซใชใ‚Šใพใ™ใ€‚ๆ–ฐใ—ใ„ใƒขใƒ‡ใƒซใŒๅฟ…ใšใ—ใ‚‚่‰ฏใ„ใจใฏ้™ใ‚‰ใชใ„ใงใ™ใฎใงใƒ•ใ‚ฃใƒผใƒชใƒณใ‚ฐใซๅˆใ†ใƒขใƒ‡ใƒซใ‚’ๆŽก็”จใ—ใฆใใ ใ•ใ„ใ€‚<br> Hires.fixๆŽจๅฅจใงใ™ใ€‚<br> ![00546-2422261728.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/q7x8-QUOoDNYYYysNT4ok.png) ``` best quality, detailed cg ,1girl,(loli:1.2),frilled camisole,pink short hair,wavy hair,pink twintails,ahoge, (skin fang:0.9), open mouth,park bench, looking at viewer, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2422261728, Size: 512x768, Model hash: 0d13d0d3a4, Model: lametta_v1745_fp16, Version: v1.5.1 ``` ![00547-4071717840.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/Olp6T62624UbIdoN_RU1J.png) ``` best quality, detailed cg, 1girl, large breasts, cleavage, sheep girl, sheep ears, elbow gloves, green eyes, circlet, happy, open mouth, sweat, dappled sunlight, cowboy shot, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4071717840, Size: 512x768, Model hash: 0d13d0d3a4, Model: lametta_v1745_fp16, Version: v1.5.1 ``` ![00565-967433583.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/DiDu0jqP5bfGjpZYnWHxM.png) ``` best quality,detailed cg,1girl,loli,moon,night,reading book, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 967433583, Size: 768x512, Model hash: 0d13d0d3a4, Model: lametta_v1745_fp16, Version: v1.5.1 ``` --- ## v1721 v1720ใฎๆ›ดใซๆ”น่‰ฏ็‰ˆ๏ผŸใงใ™ใ€‚<br> ๅ…จไฝ“็š„ใชใƒžใƒผใ‚ธๆฏ”็އใ‚’่ฆ‹็›ดใ—ใ—ใฆใ‚‚ใ†ๅฐ‘ใ—่จ€ใ†ใ“ใจใ‚’ๅŠนใใ‚„ใ™ใใ—ใฆใฟใพใ—ใŸใ€‚<br> ็ด ๆใฏไธ€็ท’ใชใฎใงใ‚ใพใ‚Šๅค‰ใ‚ใ‚‰ใชใ„ใจใ‚‚่จ€ใˆใ‚‹ใ—ใ€CLIP้ƒจๅˆ†ใซใ‚‚ๆ‰‹ใ‚’ๅ…ฅใ‚ŒใŸใฎใง็ตๆง‹ๅค‰ใ‚ใฃใŸใจใ‚‚ใ€‚<br> ใ‚„ใฏใ‚ŠHires.fixใ—ใฆไฝฟ็”จใ™ใ‚‹่ชฟๆ•ดใงใ™<br> ![00318-3790556145.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/DF0KsweRfTEJ16eLETJa4.png) ``` best quality, detailed cg, 1girl,loli,happy, smile,open mouth,pink sundress, cowboy shot, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3790556145, Size: 512x768, Model hash: e5edfc60bb, Model: lametta_v1721_fp16, Version: v1.5.1 ``` ![00312-2279767147.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/NY5drJoeay4KEqAif4hMc.png) ``` best quality, detailed cg, 1girl, (dark skin:1.4), large breasts, cleavage, elf, holding harp, elbow gloves, green eyes, circlet, sweat, dappled sunlight, cowboy shot, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2279767147, Size: 512x768, Model hash: e5edfc60bb, Model: lametta_v1721_fp16, Version: v1.5.1 ``` ![00306-3476143409.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/bT9zvZVesO4PzCHLUZycW.png) ``` best quality, detailed cg, 1girl, loli, rabbit girl, white hair, blue moon, night sky, cowboy shot, Negative prompt: bad anatomy, (worst quality, low quality:1.4), nsfw, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3476143409, Size: 768x512, Model hash: e5edfc60bb, Model: lametta_v1721_fp16, Version: v1.5.1 ``` --- ## v1720 v13ใจv15็ณปใฎ้–“ใ‚’ๅ–ใ‚Šใคใคๅ‡บๅŠ›ใฎๆฑŽ็”จๆ€งใ‚ขใƒƒใƒ—ใ‚’็›ฎๆŒ‡ใ—ใŸใƒขใƒ‡ใƒซใงใ™ใ€‚lamettaใฎ็™–ใ‚’ๅฐ‘ใ—ใ ใ‘่–„ใ‚ใฆๆ‰ฑใ„ใ‚„ใ™ใใ—ใŸๆ„Ÿใ˜ใงใ—ใ‚‡ใ†ใ‹ใ€‚<br> v15็ณปใงใฏHires.fixใ—ใŸๆ™‚ใซใพใคๆฏ›ใŒใ†ใ‚‹ใ•ใใชใ‚Šใ™ใŽใ‚‹ใใ‚‰ใ„ใŒใ‚ใ‚Šใพใ—ใŸใŒv17ใงใฏใ‚ใฃใ•ใ‚Šใ‚ๅ‚พๅ‘ใงใ™ใ€‚<br> ็›ฎใ‚‚ใ‚„ใ‚„ๅฐใ•ใ‚ใซใƒใƒฉใƒณใ‚นใ‚ˆใ๏ผŸใจใ—ใฆใ„ใพใ™ใฎใงๅฟ…่ฆใซๅฟœใ˜ใฆ"big eyes"ใ‚„LoRAใง่ฃœใฃใฆใใ ใ•ใ„ใ€‚<br> ใ‚ตใƒณใƒ—ใƒซใฏ็ด ใฎๅ‡บๅŠ›ใงใ™ใŒใ€ๅŸบๆœฌ็š„ใซใฏHires.fixใ—ใฆไฝฟ็”จใ™ใ‚‹่ชฟๆ•ดใจใ—ใฆใ‚ใ‚Šใพใ™ใ€‚<br> ![00786-3781391533.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/3bmJshQJNpaqA67khHoN1.png) ``` best quality, detailed cg, 1girl, twin braid, loli, huge breasts, happy, smile, open mouth, pinafore dress, cowboy shot, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3781391533, Size: 512x768, Model hash: 34065c40e3, Model: lametta_v1720_fp16, Version: v1.5.1 ``` ![00891-2382167223.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/B7n4oEHIDp7dP62qGpEt0.png) ``` best quality, detailed illustration, 1girl, (loli:1.2), sleeveless dress, cowboy shot, night, cityscape, from above, starry sky, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2382167223, Size: 512x768, Model hash: 34065c40e3, Model: lametta_v1720_fp16, Version: v1.5.1 ``` ![00880-1722069721.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/FB_fdYwJhLDDl4ne9obUl.png) ``` best quality, detailed cg, 1girl, smile, mint hair, (parfait:1.2), mint color, blue cream, mint chocolate chip, Negative prompt: bad anatomy, (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1722069721, Size: 768x512, Model hash: 34065c40e3, Model: lametta_v1720_fp16, Version: v1.5.1 ``` </details> <br> --- ![16xx.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/uoI3cMves6dnnV5lPCHHR.png) v1601 , **v1602** , **v1604** , **v1605**๏ผšใƒ‡ใƒ•ใ‚ฉใƒซใƒกใƒใƒƒใ‚ฏใช็ตตใ‚’ๅ‡บๅŠ›ใ™ใ‚‹ๆ–นๅ‘ๆ€งใงใ™ <details><summary><b>16xx็ณป่ฉณ็ดฐ</b></summary> ## v1605 v1574ใ‚’ใƒ™ใƒผใ‚นใซใ—ใฆCookieCutter Flexใ‚’ใƒžใƒผใ‚ธใ—ใพใ—ใŸใ€‚<br> ใ‚ˆใ‚Šanimeใฃใฝใใชใ‚Šใ‚ˆใ‚Š้ ญ่บซใŒไธ‹ใŒใฃใŸใใ‚“ใชๆ„Ÿใ˜ใฎใƒขใƒ‡ใƒซใงใ™ใ€‚<br> ๅ€‹ไบบ็š„ใซ "thick eyebrows, v-shaped eyebrows" ใŒใ‚ˆใไผผๅˆใ†ใฎใงใฏใชใ„ใ‹ใจๆ€ใ„ใพใ™ใ€‚<br> ๆๅ†™ใŒ็”˜ใ„็‚นใฏHires.fixใซใฆ่งฃๆฑบใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚<br> ![01174-2142905500.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/mHhBxLg2V4cmkxOYErPN-.png) ``` best quality, detailed cg, 1girl, (loli:1.2), thick eyebrows, black short hair, (v-shaped eyebrows:0.9), cowboy shot, happy, smile, sleeveless pink dress, outdoors, forest, from above, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2142905500, Size: 512x768, Model hash: de7db98725, Model: lametta_v1605.fp16, Version: v1.4.1 ``` ![01172-581597326.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/YRGPxbytAmGCaV-mqZq4a.png) ``` best quality, detailed illustration, loli, sheep girl, grin, sheep ears, standing, wavy short hair, outdoors, farm, cowboy shot, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 581597326, Size: 512x768, Model hash: de7db98725, Model: lametta_v1605.fp16, Version: v1.4.1 ``` ![01173-3145055862.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/iEkopra1472FIf6nFatQg.png) ``` best quality, detailed cg, 2girls, symmetrical, (animal kigurumi pajamas:1.2), (loli:1.2), twintail, blonde hair, cowboy shot, smile, night, bedroom, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3145055862, Size: 768x512, Model hash: de7db98725, Model: lametta_v1605.fp16, Version: v1.4.1 ``` --- ## v1604 v1601ใฎใƒ™ใƒผใ‚นใ‚’v1574ใธๅทฎใ—ๆ›ฟใˆใจใจใ‚‚ใซใƒžใƒผใ‚ธๆฏ”็އใ‚’่ฆ‹็›ดใ—ใŸใ‚‚ใฎใงใ™ใ€‚<br> v16xxใจใ„ใ†ใ‚ˆใ‚Šใฏใ‚ขใƒ‹ใƒกๅก—ใ‚ŠใฃใฝใใชใฃใŸv15xxใฟใŸใ„ใชๆ„Ÿใ˜ใซใชใ‚Šใพใ—ใŸใ€‚<br> ไพ‹ใซใ‚ˆใฃใฆAnythingใฎVAEใซใ‚ˆใ‚‹ๅ‡บๅŠ›ใ‚ตใƒณใƒ—ใƒซใงใ™ใŒใ€clearVAE_V1.1ใชใฉใฎใปใ†ใŒๅฅฝ็ตๆžœใซใชใ‚‹ใจๆ€ใ„ใพใ™ใ€‚<br> ใ‚ใ‚Œ...็ตๅฑ€16ใ‚ทใƒชใƒผใ‚บใฏๆ‹‡ๆŒ‡ๅง‘ๅจ˜v2.0ใƒžใƒผใ‚ธใ‚ทใƒชใƒผใ‚บใชใ‚“ใ˜ใ‚ƒ...<br> ![00940-1818502218.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/q9lmRFzc-MoQ0QF-oQg4Z.png) ``` best quality, detailed cg, 1girl, smile, (loli:0.8), kimono maid, holding tray, Negative prompt: (worst quality, low quality:1.4), Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1818502218, Size: 512x768, Model hash: ea9dc7d27b, Model: lametta_v1604_fp16, Version: v1.3.2 ``` ![00944-468116084.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/GZOiiITwrIJAmqjPGtK6P.png) ``` best quality, detailed illustration, (loli:1.2),rabbit girl, sleeveless polka dot dress, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 468116084, Size: 512x768, Model hash: ea9dc7d27b, Model: lametta_v1604_fp16, Version: v1.3.2 ``` ![00935-528650716.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/ffYOmKaXGYmLkOb3UZ1Gz.png) ``` best quality, detailed illustration,1girl,solo,alice \(alice in wonderland\), (loli:1.2),blonde hair, hair ribbon, frilled dress, frilled skirt, frilled sleeves, blue eyes, very long hair,castle background, Negative prompt: bad anatomy,(low quality, worst quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 528650716, Size: 768x512, Model hash: ea9dc7d27b, Model: lametta_v1604_fp16, Version: v1.3.2 ``` --- ## v1602 v1601ใฎใƒžใƒผใ‚ธๆฏ”็އใจ็ด ๆใ‚’่ฆ‹็›ดใ—ใฆๆ›ดใซใƒ‡ใƒ•ใ‚ฉใƒซใƒกๆ„Ÿใ‚’ใ‚ขใƒƒใƒ—ใ•ใ›ใพใ—ใŸ<br> ใชใ‚“ใ ใ‹ไปฅๅ‰ใฎlamettaใฃใฝใ•ใŒใชใใชใฃใŸใ‚ˆใ†ใช๏ผŸ "detail eyes"ใ‚’ๅ”ฑใˆใ‚‹ใจlamettaใฎ้บไผๅญใ‚’ๅฐ‘ใ—ๆ€ใ„ๅ‡บใ™ใ‹ใ‚‚<br> ๅŒใ˜SEEDใงใ‚‚Sampling stepsใชใฉใฎๅ‡บๅŠ›ใƒ‘ใƒฉใƒกใƒผใ‚ฟใงใฉใ‚“ใฉใ‚“็ดฐ้ƒจใŒๅค‰ใ‚ใ‚Šใพใ™๏ผˆๆ‹‡ๆŒ‡ๅง‘ๅจ˜v2.0ใƒžใƒผใ‚ธใ—ใŸใ‚‚ใฎใฏใใ†ใชใ‚Šใ‚„ใ™ใ„ใ‚ˆใ†ใช๏ผŸ๏ผ‰<br> ๆ‰‹่ถณใ‚„่ƒŒๆ™ฏใฎ็ ด็ถปใฏใƒ‘ใƒฉใƒกใƒผใ‚ฟใฎ่ฆ‹็›ดใ—ใ‚„Hires.fixใซใฆ่งฃๆฑบใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚<br> ![01250-2089126768.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/E0_8lU2zfdsoywrL5j3MU.png) ``` best quality, detailed illustration, 1girl, (loli:1.2), sleeveless dress, cowboy shot, night, starry sky, cityscape, chain-link fence, from above, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2089126768, Size: 512x768, Model hash: a355fdc3d9, Model: lametta_v1602_fp16, Denoising strength: 0.5, Hires upscale: 1.5, Hires steps: 8, Hires upscaler: 4x_fatal_Anime_500000_G, Version: v1.4.1 ``` ![01254-3089771647.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/hGpe9z1OuW_kXGGf6iUT4.png) ``` best quality, detailed cg, (loli:1.2), full body, bob cut, gently smile, closed mouth, little red riding hood girl, picnic basket, over knee socks, brown lace-up boots, brown corset,looking at viewer, out door, dappled sunlight, Negative prompt: (worst quality, low quality:1.4), Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3089771647, Size: 768x512, Model hash: a355fdc3d9, Model: lametta_v1602_fp16, Denoising strength: 0.5, Hires upscale: 1.5, Hires steps: 8, Hires upscaler: 4x_fatal_Anime_500000_G, Version: v1.4.1 ``` ![01257-3148478248.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/hifPWtcKFiSxXaJZbHlzx.png) ``` 6+girls, (chibi:1.2), sheep girl, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3148478248, Size: 768x512, Model hash: a355fdc3d9, Model: lametta_v1602_fp16, Denoising strength: 0.5, Hires upscale: 1.5, Hires steps: 8, Hires upscaler: 4x_fatal_Anime_500000_G, Version: v1.4.1 ``` --- ## v1601 v15xx็ณปใƒฌใ‚ทใƒ”ใ‚’ๅ†ๆง‹็ฏ‰ใ—ใŸใ‚‚ใฎใซๆ‹‡ๆŒ‡ๅง‘ๅจ˜v2.0ใ‚’ใƒžใƒผใ‚ธใ—ใพใ—ใŸ<br> ็ตตๆœฌใฎไธญใฎใ‚ˆใ†ใช้›ฐๅ›ฒๆฐ—ใŒๅ‡บใŸใ‚‰่‰ฏใ„ใชใ‚ใจใ„ใ†ใ‚ขใƒ—ใƒญใƒผใƒใงใ™<br> ๅ‡บๅŠ›ใฏClipskip2ๆŽจๅฅจใงใ™ใ€‚1ใฏๅคงใใ้ป„่‰ฒใธ่ปขใณใพใ™ใŒใ“ใ‚Œใ‚‚ใ“ใ‚Œใง้ข็™ฝใ„ใจๆ€ใ„ใพใ™<br> ![01263-3444924025.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/pql87gLhRe1KNImA2y2a6.png) ``` best quality, detailed illustration, 1girl, loli, child body, wolf girl, open mouth, skin fang, paw pose, outdoors, forest, night, full moon, Negative prompt: (worst quality, low quality:1.4), Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3444924025, Size: 512x768, Model hash: 2f57da9663, Model: lametta_v1601_fp16, Clip skip: 2, Version: v1.4.1 ``` ![01264-268483016.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/hXuHrDDl-nWrO1wj71CfS.png) ``` best quality, detailed illustration, 1girl, twin braid, blunt bangs,(loli:1.2),huge breasts, happy, smile,open mouth, pinafore dress, cowboy shot, rural, garden, dappled sunlight, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 268483016, Size: 512x768, Model hash: 2f57da9663, Model: lametta_v1601_fp16, Clip skip: 2, Version: v1.4.1 ``` ![01272-4052602564.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/e2Z6TtCmw5mHdkNUA1HsR.png) ``` best quality, detailed illustration, 1girl, loli, side ponytail, blonde hair short twintails, white dress, puffy short sleeves, happy, grin, train interior, suitcase, sitting, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4052602564, Size: 768x512, Model hash: 2f57da9663, Model: lametta_v1601_fp16, Clip skip: 2, Version: v1.4.1 ``` </details> <br> --- ![15xx.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/KnJFAxlgZwZVZFMaOE3fv.png) **v1504** , v1555, **v1574**๏ผš็›ฎใŒไธธใใฆๅคงใใ„ไธปๅŠ›ใƒขใƒ‡ใƒซ <details><summary><b>15xx็ณป่ฉณ็ดฐ</b></summary> ## v1574 v1555ใ‚’ใƒ™ใƒผใ‚นใซใ—ใฆCLIPๅ‘จใ‚Šใฎ่ฆ‹็›ดใ—ใ‚’ใ—ใŸใ‚‚ใฎใซใชใ‚Šใพใ™<br> ๆจช้•ท็”ป้ขใงใฎๅฎ‰ๅฎšๆ€งใชใฉใ‚’่งฃๆฑบใ—ใ‚ˆใ†ใจใ—ใพใ—ใŸใŒใ€็œผใ‚’่ฆ‹ๅผตใ‚‹ใปใฉใฎๆ”นๅ–„ใฏใชใ็ตๅฑ€ใฏ "bad anatomy" ใชใฉใ‚’ใƒใ‚ฌใซๅ…ฅใ‚Œใฆไฝฟใ†ๅฝขใจๆ€ใ„ใพใ™<br> v1504ไปฅ้™ใฏๅฐๆ”นไฟฎ็š„ใชใƒใƒผใ‚ธใƒงใƒณใ‚ขใƒƒใƒ—ใฐใ‹ใ‚Šใงใ™ใฎใงใ“ใฎใ‚ทใƒชใƒผใ‚บใฏใ“ใ‚Œใ‚’ไปฅใฃใฆใƒžใƒผใ‚ธ็ต‚ไบ†ใ‹ใชใจๆ€ใฃใฆใ„ใพใ™<br> ![01276-466810223.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/pwB9-m7VEWFkORlg-j980.png) ``` best quality, detailed illustration, 1gir,loli, blonde hair short twintails, white dress, puffy short sleeves, happy, grin, see-through, peace sign, outdoors, cityscape, cowboy shot, sunset, Negative prompt: (worst quality, low quality:1.4), covered navel, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 466810223, Size: 512x768, Model hash: 776f5e5678, Model: lametta_v1574_fp16, Version: v1.4.1 ``` ![01277-1146276385.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/FATEfMMTQ5LqWatUvNqsJ.png) ``` best quality, detailed illustration,1girl, solo, loli, bright room, pillows, seiza on bed, curtains,white short hair, purple eyes, white apron, light blue puffy short sleeves, light blue dress, hug stuffed bear, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1146276385, Size: 512x768, Model hash: 776f5e5678, Model: lametta_v1574_fp16, Version: v1.4.1 ``` ![01281-2894811173.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/gCYWibdqEF10C3olUqhNI.png) ``` best quality, detailed illustration,1girl, large breasts, hair flower, hairband, pointy ears, open mouth, happy, smile, mint polka dot bikini, light blush, water field, outdoors, Negative prompt: (worst quality, low quality:1.4), bad anatomy, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2894811173, Size: 768x512, Model hash: 776f5e5678, Model: lametta_v1574_fp16, Version: v1.4.1 ``` --- ## v1555 v15xxใ‚ทใƒชใƒผใ‚บใ‚’ๆŠœๆœฌ็š„ใช้ƒจๅˆ†ใ‹ใ‚‰fixใ—ใฆใฟใŸใฎใงใ™ใŒๆญฃ็›ดv1504ใจๅคงๅทฎใ‚ใ‚Šใพใ›ใ‚“<br> ็‰นๅฎšใฎLoRAใ‚’็ต„ใฟๅˆใ‚ใ›ใŸใจใใ‚„็‰นๅฎšใฎใƒ—ใƒญใƒณใƒ—ใƒˆใฎๅ‡บๅŠ›็ตๆžœใŒๅ‘ไธŠใ—ใฆใ„ใพใ™ใŒใ‚ใใพใงใ”ใไธ€้ƒจใงใ™<br> ๅ‰ฏไฝœ็”จใจใ—ใฆv1504ใ‚ˆใ‚Š็›ฎใŒๅฐใ•ใ‚ใซใชใ‚Šใพใ—ใŸใ€ใƒ—ใƒญใƒณใƒ—ใƒˆใง "big eyes" ใ‚„็›ฎใŒๅคงใใใชใ‚‹LoRAใชใฉใง่ฃœใˆใฐไปฅๅ‰ใจใปใผๅŒใ˜ใ‚ˆใ†ใซใชใ‚‹ใจๆ€ใ„ใพใ™<br> ![01285-4103269264.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/xPtkbRI9gtM6LrNysYDZc.png) ``` best quality, detailed illustration, loli, (brown rabbit girl:1.1), happy, smile, picnic basket, picnic seat, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 4103269264, Size: 512x768, Model hash: fc287aa054, Model: lametta_v1555_fp16, Version: v1.4.1 ``` ![01287-1169474282.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/MDDrzY4Laukou67Yq9Teg.png) ``` best quality, detailed illustration,1girl,loli, nurse, standing, hands on hips, (hospital:1.2), White Pantyhose, cowboy shot, Negative prompt: (worst quality, low quality:1.4),(red cross:1.2), covered navel, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1169474282, Size: 512x768, Model hash: fc287aa054, Model: lametta_v1555_fp16, Version: v1.4.1 ``` ![01289-318480518.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/ATrPqU4a5ZHSCGw1SgKx3.png) ``` best quality, detailed illustration, 1girl, loli, fairy, fairy wings, floating, (floral background:1.2), flowers, nature, lake, blue sky, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 318480518, Size: 768x512, Model hash: fc287aa054, Model: lametta_v1555_fp16, Version: v1.4.1 ``` --- ## v1504 ้ชจๆ ผใฏv13xx็ณปใ‚’ใใฎใพใพใซไธธใ„ใ‚ฟใƒฌ็›ฎใชๅ‡บๅŠ›ใŒ็‰นๅพดใฎใƒขใƒ‡ใƒซใงใ€v1503๏ผˆlametta_oldๅดใซใ‚ใ‚Šใพใ™๏ผ‰ใ‚’fixใ—ใŸใ‚‚ใฎใจใชใ‚Šใพใ™<br> ๅˆ‡ใ‚Œ้•ทใช็›ฎๅ…ƒใฎๅฅณใฎๅญใƒขใƒ‡ใƒซใฏ็ฐกๅ˜ใซ่ฆ‹ใคใ‹ใ‚‹ใฎใซๅ‘ชๆ–‡ๆŒ‡ๅฎšใ›ใšใจใ‚‚ใพใ‚“ใพใ‚‹ใŠ็›ฎใ€…ใฎๅฅณใฎๅญใ‚’ๅ‡บๅŠ›ใ—ใฆใใ‚Œใ‚‹ใƒขใƒ‡ใƒซใŒใชใ‹ใชใ‹็„กใ„ใญ๏ผŸใ˜ใ‚ƒใ‚ไฝœใ‚‹ใ‹๏ผใŒlamettaใฎ็›ฎ็š„ใฎไธ€ใคใ ใฃใŸใฎใงใ‚„ใฃใจใฒใจใคใฎใ‚ดใƒผใƒซใซ่กŒใ็€ใ„ใŸๆ„ŸใŒใ‚ใ‚Šใพใ™<br> ๏ผˆไปŠใฏไธธใใฆใ‹ใ‚ใ„ใ„ใŠ็›ฎใ€…ใฎใƒขใƒ‡ใƒซ็ตๆง‹ใ‚ใ‚‹ใ‚ˆใญ๏ผ๏ผ‰<br> ![00617-2686433535.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/jrOIc2O2CiMSRhDbCbjEq.png) ``` best quality, detailed illustration,1girl, flat_chest,(loli:1.2),(child body:1.1), blond long hair, blue eyes, ( polka dot sleeveless dress:1.2), white wide brim hat, outdoor, lifted by self, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 2686433535, Size: 512x768, Model hash: 1b0a6619fa, Model: lametta_v1504_fp16, Version: v1.4.1 ``` ![00616-1170522170.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/0hvt2dikrPUSQKT-iZ7UE.png) ``` best quality, detailed cg, 1girl, (loli:1.1), pajamas, yawning, one eye closed, hand on own mouth, fuzzy hair, Negative prompt: (worst quality, low quality:1.4), Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1170522170, Size: 512x768, Model hash: 1b0a6619fa, Model: lametta_v1504_fp16, Version: v1.4.1 ``` ![00615-1069866765.png](https://cdn-uploads.huggingface.co/production/uploads/64172e2f1f1f3b0fa80ce889/FgekuoqPDk1tHoKTgzxvW.png) ``` best quality, detailed illustration,1girl,(loli:1.2), pink twintails, pointy ears, ahoge, grin, black dress, on stomach, on bed, Negative prompt: (worst quality, low quality:1.4), bad anatomy, Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1069866765, Size: 768x512, Model hash: 1b0a6619fa, Model: lametta_v1504_fp16, Version: v1.4.1 ``` </details> <br><br> --- **lametta Merge Model**ใ€€๏ผš lamettaใ‚’ใƒžใƒผใ‚ธใ—ใŸใƒขใƒ‡ใƒซใฎใ”็ดนไป‹ ใ“ใกใ‚‰ใงๆŠŠๆกใงใใŸใ‚‚ใฎใ ใ‘ใ€ใฉใ‚Œใ‚‚ใพใŸไธ€ๅ‘ณ้•ใ†ใฎใงใ‚ˆใ‚ŠๅฅฝใฟใŒ่ฆ‹ใคใ‹ใ‚‹ใ‹ใ‚‚ ## nadenadeๆฐ - [nadenadesitai](https://civitai.com/models/79846/) lamettaใฎๅง‰ๅฆนใƒขใƒ‡ใƒซ - [surisurisitai](https://civitai.com/models/82885/) nadenadeใŒใ‚ธใƒˆ็›ฎใซใชใฃใฆใ‚„ใฃใฆใใŸ - [funifunisitai](https://civitai.com/models/113985/) surisuriใŒใƒ‡ใƒ•ใ‚ฉใƒซใƒกใ•ใ‚Œใฆใ‚ˆใ‚Šๅฏๆ„›ใใชใฃใŸ๏ผ ## Yntecๆฐ - [lamettaRemix](https://huggingface.co/Yntec/lamettaRemix) v1745ใจv1602ใฎใƒžใƒผใ‚ธใƒขใƒ‡ใƒซ - [LAMEanime & lamettaSEXTILION](https://huggingface.co/Yntec/LAMEanime) lamettaRemixใจanimeSEXTILLIONใฎใƒžใƒผใ‚ธใƒขใƒ‡ใƒซ ็ด ๆใจใ—ใฆใ‚‚ไฝฟใฃใฆใ‚‚ใ‚‰ใˆใ‚‹ใฎใฏๆœฌๅฝ“ใซๅฌ‰ใ—ใ„ใงใ™ใ€‚ <br> --- # ใ‚ฏใƒฌใ‚ธใƒƒใƒˆ ใƒžใƒผใ‚ธใซไฝฟ็”จใ•ใ›ใฆใ„ใŸใ ใ„ใŸใƒขใƒ‡ใƒซ๏ผˆๆ•ฌ็งฐ็•ฅ๏ผ‰ - ACertaintyใ€€@JosephusCheung (LoRA) - Counterfeit-V2.2ใ€€@gsdf (v1,v2,v3) - SSSSLLDDLL v1ใ€€@kgmkm (v9) - CoffeeNSFW v1.0ใ€€@CoffeeCoffee (v2) - Anime Chibi Modelใ€€@AiRetard (v412,v413) - DDosMix_v2ใ€€@DiaryOfSta (v5,v9,v13) - AniDosMix_Aใ€€@DiaryOfSta (v9,v13) - QteaMixใ€€@chenxluo (v13็ณป) - NeatNess Fluffy Fur Mix v1.0,v2.0,v3.0,Unicorn edition,Infinity,ใ€€@NeatNess (v9,v13) - mix-proV3,V3.5,V4,V4.5+ColorBox,ใ€€@P317cm (v13,v1503,v1504) - CuteYukiMix v1.0,v3.0ใ€€@newlifezfztty761 (v1503,v1504) - Ares Mix v0.1ใ€€@rocp (v1503,v1504) - Doll Like Animeใ€€@PromptSharingSamaritan (v1523) - Grilled_Lamprey v2627ใ€€@Liquidn2 (v1523) - Yuzu v1.0ใ€€@Ikena (v1523) - Defacta3th v1.0ใ€€@Aihub_tokyo (v1555) - Coconut furry mixใ€€@YukiLaneige (FU) - Sweet Factoryใ€€@RIXYN (v1555) - AkkaiMixใ€€@Akkairosu (v1574) - ๆ‹‡ๆŒ‡ๅง‘ๅจ˜๏ผˆThumbelina๏ผ‰v2.0ใ€€@Cinsdia (v1601,v1602,v1604) - CookieCutter Flex v1.01,Flex v3.5ใ€€@Kybalico ([email protected],[email protected]) - SweetParfaitใ€€@sleepotimer (v1720) - ToraFurryMix v2.0ใ€€@tlano (v1745) - S-flat-nullpo-testBBB4 @nullpox (v1921,v1922) - NuipeniMix ver.2 @McSionnaigh (v1921,v1922) - WateryAbyss @The_Missing_Models (v1921,v1922) - Simple ink-prt @Yuno779ใ€€(v2012) - Rabbit v6 @Rabbit_YourMajesty (v2012) - ClearVAE v1.1(Variant) @RedRayz (v19,v20) - flat1,flat2,boldline,bigeye,hanme @2vXpSwA7 (V13,FD) ๅ…จใƒขใƒ‡ใƒซใซใ“ใ‚Œใ‚‰ใ™ในใฆใŒใƒžใƒผใ‚ธใ•ใ‚Œใฆใ„ใ‚‹ใ‚ใ‘ใงใฏใ‚ใ‚Šใพใ›ใ‚“ใŒไธ€ๆ‹ฌใ—ใฆใ‚ฏใƒฌใ‚ธใƒƒใƒˆ่จ˜่ผ‰ใ•ใ›ใฆใ„ใŸใ ใใพใ™ใ€‚<br> ่จ˜ๆ†ถใจใƒžใƒผใ‚ธๅฑฅๆญดใ‹ใ‚‰่ฟฝใˆใ‚‹ใ‚‚ใฎใฏๆ‹ฌๅผงๆ›ธใใซๅ…ฅใ‚Œใฆใฟใพใ—ใŸใŒๅคใ„ใƒขใƒ‡ใƒซใฏใ‚ใพใ‚Šๆญฃ็ขบใงใฏใชใ„ใ‹ใ‚‚ใ€‚<br> v2ใ‹ใ‚‰ๆ—งใƒใƒผใ‚ธใƒงใƒณใ‚’็ง˜ไผใฎใ‚ฟใƒฌใฟใŸใ„ใซ็ถ™ใŽ่ถณใ—ไฝฟใ„v9ใพใงใงไธ€ๆ—ฆๅŒบๅˆ‡ใ‚Šใ€v13ใ‹ใ‚‰ๆ–ฐ่ฆใง็ง˜ไผใฎใ‚ฟใƒฌใ‚’ไฝœใ‚Š็ถ™ใŽ่ถณใ™ใ‚ˆใ†ใชใƒฌใ‚ทใƒ”ใซใชใฃใฆใ„ใพใ™ใ€‚<br> <br><br> # ๅˆฉ็”จใซ้š›ใ—ใฆ๏ผˆใƒฉใ‚คใ‚ปใƒณใ‚นใชใฉ๏ผ‰ ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚Œใฆใ„ใ‚‹ใƒขใƒ‡ใƒซๅ…จใฆใซใŠใ„ใฆ[creativeml-openrail-m](https://huggingface.co/spaces/CompVis/stable-diffusion-license)ใซๆบ–ใ˜ใพใ™ใ€‚ ่ฉณใ—ใใฏใ€Œcreativeml-openrail-mใ€ใงๆคœ็ดขใ—ใฆใ‚‚ใ‚‰ใˆใ‚Œใฐ็ฟป่จณใ•ใ‚ŒใŸ่งฃ่ชฌใชใฉใŒ็ขบ่ชใงใใ‚‹ใจๆ€ใ„ใพใ™ใ€‚<br> Attachment Aใฎ่ฃœ่ถณใจใ—ใฆใ€็‰นๅฎšใฎไฝœๅ“ใ‚„ไฝœ้ขจใชใฉใ‚’ๆจกๅ€ฃใ—ใฆใใฎๆจฉๅˆฉ่€…็ญ‰ใซ่ฟทๆƒ‘ใจใชใ‚‹ใ‚ˆใ†ใชไฝฟ็”จใฏ็ฆๆญขใจใ•ใ›ใฆใ„ใŸใ ใใพใ™ใ€‚<br> <br> civitai้ขจใช่กจ่จ˜ใงใ™ใจไปฅไธ‹ใฎ้€šใ‚Š<br> <span class="text-green-500">OK</span>ใ€€ใ‚ฏใƒฌใ‚ธใƒƒใƒˆใ‚’ๅ…ฅใ‚Œใšใซใƒขใƒ‡ใƒซใ‚’ไฝฟ็”จใ™ใ‚‹<br>๏ผˆUse the model without crediting the creator๏ผ‰<br> ็”Ÿๆˆ็”ปๅƒใซใ‚ฏใƒฌใ‚ธใƒƒใƒˆใฎๆœ‰็„กใฏๅ•ใ„ใพใ›ใ‚“ใ€ใƒžใƒผใ‚ธ็ด ๆใจใ—ใฆใ‚‚ๆœ‰็„กใฏๅ•ใ„ใพใ›ใ‚“ใŒใ‚ใ‚‹ใจๅ–œใณใพใ™ <span class="text-green-500">OK</span>ใ€€็”Ÿๆˆใ—ใŸ็”ปๅƒใ‚’่ฒฉๅฃฒใ™ใ‚‹<br>๏ผˆSell images they generate๏ผ‰<br> ็”Ÿๆˆใ—ใŸ็”ปๅƒใฏใ‚ใชใŸใฎไฝœ็”ปๆ„ๅ›ณใŒ่พผใ‚ใ‚‰ใ‚Œใฆใ„ใพใ™ใ‹ใ‚‰ใ‚ใชใŸใฎใ‚‚ใฎใงใ™ <span class="text-green-500">OK</span>ใ€€ๆœ‰ๅ„Ÿใฎ็”ปๅƒใ‚’็”Ÿๆˆใ™ใ‚‹ใ‚ตใƒผใƒ“ใ‚นใ‚’้‹ๅ–ถใ™ใ‚‹<br>๏ผˆRun on services that generate images for money๏ผ‰<br> ใƒขใƒ‡ใƒซๅใฎ่กจ่จ˜ใ‚’ใ—ใฆใ„ใŸใ ใ‘ใ‚Œใฐๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€ๆœซๅฐพใฎ "_fp16" ใฏ็œ็•ฅใ—ใฆๆง‹ใ„ใพใ›ใ‚“ <span class="text-green-500">OK</span>ใ€€ใ“ใฎใƒขใƒ‡ใƒซใ‚’ไฝฟใฃใŸใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹<br>๏ผˆShare merges using this model๏ผ‰<br> ่‡ช็”ฑใซ่กŒใฃใฆใ„ใŸใ ใ„ใฆๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใ€ไธŠ่จ˜ใฎ้€šใ‚Šใ‚ฏใƒฌใ‚ธใƒƒใƒˆใฎๆœ‰็„กใฏๅ•ใ„ใพใ›ใ‚“ใŒใ—ใฆใ„ใŸใ ใ‘ใ‚‹ใจๅ–œใณใพใ™ <span class="text-red-500">NG</span>ใ€€ใ“ใฎใƒขใƒ‡ใƒซใพใŸใฏใ“ใฎใƒขใƒ‡ใƒซใ‚’ไฝฟใฃใŸใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใ‚’่ฒฉๅฃฒใ™ใ‚‹<br>๏ผˆSell this model or merges using this model๏ผ‰<br> ใ“ใฎใƒขใƒ‡ใƒซใฏๅฝ“ๆ–นใซ็„กๆ–ญใง่ฒฉๅฃฒใฏๅ‡บๆฅใพใ›ใ‚“ใ€ใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใซใคใ„ใฆใฏๆ‰‹ใ‚’ๅŠ ใˆใŸๆ–นใฎ่ฒฌไปปใจใ—ใฆใ“ใกใ‚‰ใฏไธ€ๅˆ‡้–ขไธŽใ„ใŸใ—ใพใ›ใ‚“ <span class="text-green-500">OK</span>ใ€€ใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใ‚’ๅ…ฑๆœ‰ใ™ใ‚‹้š›ใซ็•ฐใชใ‚‹ๆจฉ้™ใ‚’ๆŒใŸใ›ใ‚‹<br>๏ผˆHave different permissions when sharing merges๏ผ‰<br> ๅ•้กŒใ‚ใ‚Šใพใ›ใ‚“ใŒไธŠ่จ˜ใฎ้€šใ‚Šๆ‰‹ใ‚’ๅŠ ใˆใŸๆ–นใฎ่ฒฌไปปใจใ—ใฆๆœ‰ๅˆฉไธๅˆฉใซ้–ขใ‚ใ‚‰ใšใ“ใกใ‚‰ใฏไธ€ๅˆ‡ใฎ้–ขไธŽใ‚’ใ„ใŸใ—ใพใ›ใ‚“ <br><br> ไปฅไธŠ <br> NAIใฎไปถใซใคใ„ใฆ<br> NAIใƒชใƒผใ‚ฏใƒขใƒ‡ใƒซใ‚’็›ดๆŽฅไฝฟ็”จใฏๅ…จใƒขใƒ‡ใƒซใซใŠใ„ใฆ่กŒใฃใฆใŠใ‚Šใพใ›ใ‚“ใ€‚ใพใŸNAIใ‚’็›ดๆŽฅ็š„ใซไฝฟ็”จใ—ใŸใจๆ˜Ž่จ˜ใ•ใ‚Œใฆใ„ใ‚‹ใƒขใƒ‡ใƒซใ‚‚้ฟใ‘ใŸใคใ‚‚ใ‚Šใงใ™ใ€‚<br> ใ—ใ‹ใ—ใชใŒใ‚‰ใƒžใƒผใ‚ธ็ด ๆใฎใ™ในใฆใงๅๅˆ†ใ‹ใค็ขบๅฎŸใชๆคœ่จผใŒใงใใ‚‹ใ‚ใ‘ใงใ‚‚ใชใ„ใŸใ‚ใ€ใฉใ“ใ‹ใ—ใ‚‰ใ‹ใ‚‰ใฎ็ตŒ่ทฏใงๆททๅ…ฅใ—ใฆใ„ใ‚‹ใจๆ€ใ‚ใ‚Œใพใ™ใ€‚<br> ไปถใฎTwitterใซใŠใ„ใฆใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใธใฎๅ•ใ„ใ‹ใ‘ใซๅฏพใ—ใฆ่ฆ‹ใˆใ‚‹ๅฝขใงไฝ•ใฎๅ›ž็ญ”ใ‚‚ใชใ•ใ‚Œใฆใ„ใชใ„ใ“ใจใ‹ใ‚‰็พ็Šถใฏใ“ใฎใพใพใจใ—ใพใ™ใ€‚<br> ใ‚‚ใ—ใƒžใƒผใ‚ธใƒขใƒ‡ใƒซใซๅฏพใ—ใฆใฎๆ˜Ž็ขบใชๅ›ž็ญ”ใŒใ‚ใฃใŸๅ ดๅˆใฏใใฎๅ›ž็ญ”ๅฆ‚ไฝ•ใงๅ…ฌ้–‹็ต‚ไบ†ใจใชใ‚‹ๅ ดๅˆใ‚‚ใ‚ใ‚Šใพใ™ใฎใงใ”ๆ‰ฟ็ŸฅใŠใใใ ใ•ใ„ใ€‚<br> <br> # ่›‡่ถณ ๆœ€่ฟ‘ใฏใกใ‚‡ใฃใจใ‚นใƒฉใƒณใƒ—ๆฐ—ๅ‘ณใชใฎใ‹ไฝ•ใ‚’่ชฟๆ•ดใ—ใฆใ‚‚ใ—ใฃใใ‚Šใ“ใชใ„ใงใ™ใ€‚<br>
cledoux42/Age_Classify_v001
cledoux42
"2023-04-18T07:23:59Z"
1,397
4
transformers
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2023-04-18T07:23:40Z"
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: Age_Classify_v001 results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.5719775557518005 --- # Age_Classify_v001 Autogenerated by HuggingPics๐Ÿค—๐Ÿ–ผ๏ธ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### 0-2 ![0-2](images/0-2.jpg) #### 10-19 ![10-19](images/10-19.jpg) #### 20-29 ![20-29](images/20-29.jpg) #### 3-9 ![3-9](images/3-9.jpg) #### 30-39 ![30-39](images/30-39.jpg) #### 40-49 ![40-49](images/40-49.jpg) #### 50-59 ![50-59](images/50-59.jpg) #### 60-69 ![60-69](images/60-69.jpg) #### 70-79 ![70-79](images/70-79.jpg)
agemagician/mlong-t5-tglobal-base
agemagician
"2023-05-21T18:51:42Z"
1,397
7
transformers
[ "transformers", "pytorch", "jax", "longt5", "text2text-generation", "multilingual", "af", "am", "ar", "az", "be", "bg", "bn", "ca", "ceb", "co", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fil", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "haw", "hi", "hmn", "ht", "hu", "hy", "ig", "is", "it", "iw", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lb", "lo", "lt", "lv", "mg", "mi", "mk", "ml", "mn", "mr", "ms", "mt", "my", "ne", "nl", "no", "ny", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "sm", "sn", "so", "sq", "sr", "st", "su", "sv", "sw", "ta", "te", "tg", "th", "tr", "uk", "und", "ur", "uz", "vi", "xh", "yi", "yo", "zh", "zu", "dataset:mc4", "arxiv:2305.11129", "arxiv:1912.08777", "arxiv:2112.07916", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-05-19T20:49:19Z"
--- license: apache-2.0 language: - multilingual - af - am - ar - az - be - bg - bn - ca - ceb - co - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fil - fr - fy - ga - gd - gl - gu - ha - haw - hi - hmn - ht - hu - hy - ig - is - it - iw - ja - jv - ka - kk - km - kn - ko - ku - ky - la - lb - lo - lt - lv - mg - mi - mk - ml - mn - mr - ms - mt - my - ne - nl - no - ny - pa - pl - ps - pt - ro - ru - sd - si - sk - sl - sm - sn - so - sq - sr - st - su - sv - sw - ta - te - tg - th - tr - uk - und - ur - uz - vi - xh - yi - yo - zh - zu datasets: - mc4 --- # MLongT5 (transient-global attention, base-sized model) MLongT5 model pre-trained on Multi-language corpus. The model was introduced in the paper [mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences](https://arxiv.org/pdf/2305.11129.pdf) by Uthus et al. and first released in [the LongT5 repository](https://github.com/google-research/longt5). All the model architecture and configuration can be found in [Flaxformer repository](https://github.com/google/flaxformer) which uses another Google research project repository [T5x](https://github.com/google-research/t5x). Disclaimer: The team releasing MLongT5 did not write a model card for this model so this model card has been written by Ahmed Elnaggar. ## Model description MLongT5 model is an encoder-decoder transformer pre-trained in a text-to-text denoising generative setting ([Pegasus-like generation pre-training](https://arxiv.org/pdf/1912.08777.pdf)). MLongT5 model is an extension of [LongT5 model](https://arxiv.org/abs/2112.07916), and it enables using one of the two different efficient attention mechanisms - (1) Local attention, or (2) Transient-Global attention. The usage of attention sparsity patterns allows the model to efficiently handle input sequence. MLongT5 is particularly effective when fine-tuned for text generation (summarization, question answering) which requires handling long input sequences (up to 16,384 tokens). ## Intended uses & limitations The model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=mlongt5) to look for fine-tuned versions on a task that interests you. ### How to use ### How to use The following shows how one can extract the last hidden representation for the model. ```python from transformers import T5Tokenizer, LongT5Model tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base") model = LongT5Model.from_pretrained("agemagician/mlong-t5-tglobal-base") inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` The following shows how one can predict masked passages using the different denoising strategies. ### S-Denoising For *S-Denoising*, please make sure to prompt the text with the prefix `[S2S]` as shown below. ```python from transformers import LongT5ForConditionalGeneration, T5Tokenizer import torch model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-base", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda") tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base") input_string = "[S2S] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man with a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere <extra_id_0>" inputs = tokenizer(input_string, return_tensors="pt").input_ids.to("cuda") outputs = model.generate(inputs, max_length=200) print(tokenizer.decode(outputs[0])) ``` ### R-Denoising For *R-Denoising*, please make sure to prompt the text with the prefix `[NLU]` as shown below. ```python from transformers import LongT5ForConditionalGeneration, T5Tokenizer import torch model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-base", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda") tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base") input_string = "[NLU] Mr. Dursley was the director of a firm called <extra_id_0>, which made <extra_id_1>. He was a big, solid man with a bald head. Mrs. Dursley was thin and <extra_id_2> of neck, which came in very useful as she spent so much of her time <extra_id_3>. The Dursleys had a small son called Dudley and <extra_id_4>" inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda") outputs = model.generate(inputs, max_length=200) print(tokenizer.decode(outputs[0])) ``` ### X-Denoising For *X-Denoising*, please make sure to prompt the text with the prefix `[NLG]` as shown below. ```python from transformers import LongT5ForConditionalGeneration, T5Tokenizer import torch model = LongT5ForConditionalGeneration.from_pretrained("agemagician/mlong-t5-tglobal-base", low_cpu_mem_usage=True, torch_dtype=torch.bfloat16).to("cuda") tokenizer = T5Tokenizer.from_pretrained("agemagician/mlong-t5-tglobal-base") input_string = "[NLG] Mr. Dursley was the director of a firm called Grunnings, which made drills. He was a big, solid man wiht a bald head. Mrs. Dursley was thin and blonde and more than the usual amount of neck, which came in very useful as she spent so much of her time craning over garden fences, spying on the neighbours. The Dursleys had a small son called Dudley and in their opinion there was no finer boy anywhere. <extra_id_0>" model.cuda() inputs = tokenizer(input_string, return_tensors="pt", add_special_tokens=False).input_ids.to("cuda") outputs = model.generate(inputs, max_length=200) print(tokenizer.decode(outputs[0])) ``` ### BibTeX entry and citation info ```bibtex @misc{uthus2023mlongt5, title={mLongT5: A Multilingual and Efficient Text-To-Text Transformer for Longer Sequences}, author={David Uthus and Santiago Ontaรฑรณn and Joshua Ainslie and Mandy Guo}, year={2023}, eprint={2305.11129}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` > Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
TheBloke/Project-Baize-v2-13B-GPTQ
TheBloke
"2023-08-21T12:01:10Z"
1,397
11
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:2304.01196", "license:other", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us" ]
text-generation
"2023-05-24T11:37:40Z"
--- inference: false license: other --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Project Baize V2 13B GPTQ These files are GPTQ 4bit model files for [Project Baize V2 13B](https://huggingface.co/project-baize/baize-v2-13b). It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa). ## Other repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Project-Baize-v2-13B-GPTQ) * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/Project-Baize-v2-13B-GGML) * [Original unquantised fp16 model in HF format](https://huggingface.co/project-baize/baize-v2-13b) ## How to easily download and use this model in text-generation-webui Open the text-generation-webui UI as normal. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter `TheBloke/Project-Baize-v2-13B-GPTQ`. 3. Click **Download**. 4. Wait until it says it's finished downloading. 5. Click the **Refresh** icon next to **Model** in the top left. 6. In the **Model drop-down**: choose the model you just downloaded, `Project-Baize-v2-13B-GPTQ`. 7. If you see an error in the bottom right, ignore it - it's temporary. 8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = 128`, `model_type = Llama` 9. Click **Save settings for this model** in the top right. 10. Click **Reload the Model** in the top right. 11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! ## Provided files **Compatible file - Baize-v2-13B-4bit-128g.no-act-order.safetensors** In the `main` branch - the default one - you will find `Baize-v2-13B-4bit-128g.no-act-order.safetensors` This will work with all versions of GPTQ-for-LLaMa. It has maximum compatibility It was created without the `--act-order` parameter. It may have slightly lower inference quality compared to the other file, but is guaranteed to work on all versions of GPTQ-for-LLaMa and text-generation-webui. * `Baize-v2-13B-4bit-128g.no-act-order.safetensors` * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches * Works with text-generation-webui one-click-installers * Parameters: Groupsize = 128g. No act-order. * Command used to create the GPTQ: ``` python llama.py /workspace/ggml/TheBloke_Project-Baize-v2-13B-GGML/HF wikitext2 --wbits 4 --true-sequential --groupsize 128 --save_safetensors /workspace/ggml/TheBloke_Project-Baize-v2-13B-GGML/gptq/Baize-v2-13B-4bit-128g.no-act-order.safetensors ``` <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, ้˜ฟๆ˜Ž, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieล‚, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> # Original model info on Github ## News - **[May 23, 2023]** We are releasing Baize v2! Check out the [7B](https://huggingface.co/project-baize/baize-v2-7b) and [13B](https://huggingface.co/project-baize/baize-v2-13b) model. Code coming soon! - **[Apr. 27, 2023]** [Fastchat](https://github.com/lm-sys/FastChat) now supports Baize. Try the new [CLI and API](https://github.com/project-baize/baize-chatbot#cli-and-api)! - **[Apr. 21, 2023]** We now have a [script](https://github.com/project-baize/baize-chatbot#merge-lora-into-llama) to merge LoRA weights into standard HF model so you can use it everywhere HF is supported! ## What's Baize? Baize is an open-source chat model trained with [LoRA](https://github.com/microsoft/LoRA). It uses 100k dialogs generated by letting ChatGPT chat with itself. We also use Alpaca's data to improve its performance. We have released 7B, 13B and 30B models. Please refer to the [paper](https://arxiv.org/pdf/2304.01196.pdf) for more details. ## Why it's called Baize? Baize (pronounced as By-zor; Simplified Chinese ็™ฝๆณฝ, Traditional Chinese ็™ฝๆพค, Japanese ็™ฝๆฒข, ใฏใใŸใ) is a mythical creature in Chinese folklore, who speaks human languages and knows everything. This is exactly what we expect from a chat model. ## Overview โš ๏ธ All model weights and data are for **research use ONLY**. Commercial use is **strictly prohibited**. We accept **NO responsibility or liability** for any use of our data, code or weights. This is the repo for the Baize project, which aims to build a chat model with LLaMA. This repository contains: - 54K/57K/47K [dialogs](data) from Quora, StackOverFlow and MedQuAD questions - The [code](collect.py) for collecting self-chat data - The [code](finetune.py) for training Baize - The [code](demo/app.py) for chat model demo (forked from [ChuanhuChatGPT](https://github.com/GaiZhenbiao/ChuanhuChatGPT)) ### Model Release #### V1 - [Baize-v1-7B (LoRA weights)](https://huggingface.co/project-baize/baize-lora-7B) - [Baize-v1-13B (LoRA weights)](https://huggingface.co/project-baize/baize-lora-13B) - [Baize-v1-30B (LoRA weights)](https://huggingface.co/project-baize/baize-lora-30B) - [Baize Healthcare-7B (LoRA weights)](https://huggingface.co/project-baize/baize-healthcare-lora-7b) #### V2 - [Baize-v2-7B](https://huggingface.co/project-baize/baize-v2-7b) - [Baize-v2-13B](https://huggingface.co/project-baize/baize-v2-13b) ### Community Models and Data - [Fauno](https://github.com/RSTLess-research/Fauno-Italian-LLM/) is an Italian version of Baize. - [Dutch Data](https://github.com/project-baize/baize-chatbot/issues/34): Baize data translated into Dutch. ## CLI and API Now you can use Baize with [Fastchat](https://github.com/lm-sys/FastChat) for the CLI and API provided by Fastchat! First, install the latest version of Fastchat: ```bash pip install git+https://github.com/huggingface/peft.git pip install git+https://github.com/lm-sys/FastChat.git ``` (For v1 models only): Merge Baize's LoRA weights into LLaMA. Take 7B checkpoint as an example. ```bash # Note you have to include "baize" in the target directory so Fastchat can recognize Baize. python3 -m fastchat.model.apply_lora --base huggyllama/llama-7b --target ./model_weights/baize-7b --lora project-baize/baize-lora-7B ``` Now, run the CLI in your terminal! More options and configs can be found [here](https://github.com/lm-sys/FastChat#inference-with-command-line-interface). ```bash # Optional: Add `--style rich` for better style. python -m fastchat.serve.cli --model-path ./model_weights/baize-7b ``` You can use Baize with OpenAI API or Hugging Face API following the instruction [here](https://github.com/lm-sys/FastChat#api). ## Demo [![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-md.svg)](https://huggingface.co/spaces/project-baize/Baize-7B) [![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-md.svg)](https://huggingface.co/spaces/project-baize/Baize-7B?duplicate=true) <p align="center"> <img alt="Demo" src="https://user-images.githubusercontent.com/22514219/229863275-0e83c1cf-0661-4afa-9a47-1ce20fb521ae.gif"> </p> You can either host it on your local machine or access the [online demo](https://huggingface.co/spaces/project-baize/Baize-7B). The demo fetches the [LLaMA](https://huggingface.co/huggyllama/llama-7b) model and the [LoRA weights](https://huggingface.co/project-baize/baize-lora-7B) from the Hugging Face model hub, then runs a user-friendly Gradio interface for chatting. ### How to Run Locally First, make sure your Python version is 3.8, and then install the required packages using the command below: ```bash cd demo pip install -r requirements.txt ``` You can host the model on your local machine using the following command: ```bash # We assume you have obtained access to use LLaMA. The following LLaMA weights are from a 3rd party. base_model=huggyllama/llama-7b lora_model=project-baize/baize-lora-7B python app.py $base_model $lora_model ``` #### GPU VRAM Requirements | | Inference (without int8) | |-----------|--------------------------| | Baize-7B | 16GB | | Baize-13B | 28GB | | Baize-30B | 67GB | If you have a GPU with smaller VRAM, you can do inference with `int8`, by passing the 8bit argument: ```bash python app.py $base_model $lora_model 8bit ``` ## How to Reproduce ### Setup 1. Install dependencies ```bash pip install -r requirements.txt ``` 2. If `bitsandbytes` doesn't work, [install it from source](https://github.com/TimDettmers/bitsandbytes/blob/main/compile_from_source.md). Windows users can follow [these instructions](https://github.com/tloen/alpaca-lora/issues/17). ### Data Collecting You can use our [released data](data) or collect the data from ChatGPT using the following command: ```bash num_process=10 # The number of processes to collect data max_total_tokens=500000 # Set maximum numbers of tokens to collect data api_key=xxxxxxxxxxxxxxxxx # Set your openai api key for ((i=0; i<$num_process; i++)) do python collect.py $api_key $max_total_tokens $i $num_process stackoverflow & python collect.py $api_key $max_total_tokens $i $num_process quora & python collect.py $api_key $max_total_tokens $i $num_process medical & done ``` After collecting data, you use the following command to preprocess data: ```bash python preprocess.py stackoverflow python preprocess.py quora python preprocess.py medical ``` ### Use your own data If there's a specific dataset you want to use as seeds for ChatGPT self-chatting, you can simply modify `collect.py` to load your own data. ### Training The fine-tuning code is designed to run on an A100-80G GPU. The `finetune.py` script accepts three parameters: foundation model size (i.e., 7B, 13B, or 30B), batch size, learning rate and datasets. Note the total batch size is fixed to 64 (can be modified [here](https://github.com/project-baize/baize/blob/cbcf39902fcdfab8d935b7ea771a4e7d452a1be0/finetune.py#L24)) and the batch size here is the per device batch size before gradient accumulation. Set it to a smaller value if you are training on a GPU with smaller VRAM. ```bash # For the 7B model (takes about 9 hours) python finetune.py 7b 32 0.0002 alpaca,stackoverflow,quora # For the 13B model (takes about 16 hours) python finetune.py 13b 16 0.0001 alpaca,stackoverflow,quora # For the 30B model (takes about 36 hours) python finetune.py 30b 8 0.00005 alpaca,stackoverflow,quora ``` #### GPU VRAM Consumption With the settings ABOVE: | | Training (with int8) | |-----------|----------------------| | Baize-7B | 26GB | | Baize-13B | 25GB | | Baize-30B | 42GB | Got a question? See [this issue](https://github.com/project-baize/baize-chatbot/issues/26). ### Merge LoRA into LLaMA Now you can easily merge the trained LoRA weights into a LLaMA model so you can use it with everything that supports standard Hugging Face API! Here's an example for merging `baize-lora-7B` into LLaMA-7B. ```bash python merge_lora.py \ --base huggyllama/llama-7b \ --target ~/model_weights/baize-7b \ --lora project-baize/baize-lora-7B ``` ## Citation ```bibtex @article{xu2023baize, title={Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data}, author={Xu, Canwen and Guo, Daya and Duan, Nan and McAuley, Julian}, journal={arXiv preprint arXiv:2304.01196}, year={2023} } ``` <hr> [![Share to Community](https://huggingface.co/datasets/huggingface/badges/raw/main/powered-by-huggingface-light.svg)](https://huggingface.co)
digiplay/DreamShaper_8
digiplay
"2023-07-30T14:30:18Z"
1,397
15
diffusers
[ "diffusers", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-07-30T13:39:08Z"
--- license: other tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true --- Model info: https://civitai.com/models/4384?modelVersionId=128713 Original Author's DEMO images : ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/dd9b038c-bd15-43ab-86ab-66e145ad7ff2/width=1096/26072158-132340247-8k%20portrait%20of%20beautiful%20cyborg%20with%20brown%20hair,%20intricate,%20elegant,%20highly%20detailed,%20majestic,%20digital%20photography,%20art%20by%20artg_ed.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/c1033497-007c-4a73-b812-915c8e32e8fe/width=1120/26072224-5775713-(masterpiece),%20(extremely%20intricate_1.3),%20(realistic),%20portrait%20of%20a%20girl,%20the%20most%20beautiful%20in%20the%20world,%20(medieval%20armor),%20me.jpeg) ![](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/e5f49ec5-62f7-4511-8be6-517042729091/width=1352/26072419-1584580292-masterpiece,%20(photorealistic_1.4),%20best%20quality,%20beautiful%20lighting,%20(ulzzang-6500_0.5),%20lucy%20_(cyberpunk_),%201girl,%20white%20hair,.jpeg) Sample image generated by huggingface's API : ![a04f24a1-7301-4458-a292-a8f41636c617.jpeg](https://cdn-uploads.huggingface.co/production/uploads/646c83c871d0c8a6e4455854/8wrFo-Rq2mHn0x8Kwu-KZ.jpeg) *generated by huggingface's API
ddobokki/Llama-2-70b-orca-200k
ddobokki
"2023-08-08T00:15:50Z"
1,397
8
transformers
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "instruct", "instruction", "en", "doi:10.57967/hf/1687", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-03T01:23:33Z"
--- language: - en tags: - llama-2 - instruct - instruction pipeline_tag: text-generation --- # Llama-2-70b-orca-200k model card ### Used Datasets - OpenOrca (200k sampling) ### Prompt Template ``` ### Human: {Human} ### Assistant: {Assistant} ``` ### Contribute [ddobokki](https://github.com/ddobokki) [YooSungHyun](https://github.com/YooSungHyun) ### License [LICENSE.txt](meta-license/LICENSE.txt) ### USE_POLICY [USE_POLICY.md](meta-license/USE_POLICY.md) ### Responsible Use Guide [Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf)
TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF
TheBloke
"2023-09-27T12:47:24Z"
1,397
1
transformers
[ "transformers", "gguf", "llama", "dataset:jondurbin/airoboros-gpt4-1.4.1", "base_model:jondurbin/airoboros-l2-7b-gpt4-1.4.1", "license:other", "text-generation-inference", "region:us" ]
null
"2023-09-05T06:19:49Z"
--- license: other datasets: - jondurbin/airoboros-gpt4-1.4.1 model_name: Airoboros Llama 2 7B GPT4 1.4.1 base_model: jondurbin/airoboros-l2-7b-gpt4-1.4.1 inference: false model_creator: Jon Durbin model_type: llama prompt_template: 'A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user''s input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Airoboros Llama 2 7B GPT4 1.4.1 - GGUF - Model creator: [Jon Durbin](https://huggingface.co/jondurbin) - Original model: [Airoboros Llama 2 7B GPT4 1.4.1](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-1.4.1) <!-- description start --> ## Description This repo contains GGUF format model files for [Jon Durbin's Airoboros Llama 2 7B GPT4 1.4.1](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-1.4.1). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF) * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-1.4.1) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Airoboros ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Jon Durbin's Airoboros Llama 2 7B GPT4 1.4.1](https://huggingface.co/jondurbin/airoboros-l2-7b-gpt4-1.4.1). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [airoboros-l2-7b-gpt4-1.4.1.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [airoboros-l2-7b-gpt4-1.4.1.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [airoboros-l2-7b-gpt4-1.4.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [airoboros-l2-7b-gpt4-1.4.1.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [airoboros-l2-7b-gpt4-1.4.1.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [airoboros-l2-7b-gpt4-1.4.1.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [airoboros-l2-7b-gpt4-1.4.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [airoboros-l2-7b-gpt4-1.4.1.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [airoboros-l2-7b-gpt4-1.4.1.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [airoboros-l2-7b-gpt4-1.4.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [airoboros-l2-7b-gpt4-1.4.1.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [airoboros-l2-7b-gpt4-1.4.1.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF/blob/main/airoboros-l2-7b-gpt4-1.4.1.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF and below it, a specific filename to download, such as: airoboros-l2-7b-gpt4-1.4.1.q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF airoboros-l2-7b-gpt4-1.4.1.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF airoboros-l2-7b-gpt4-1.4.1.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m airoboros-l2-7b-gpt4-1.4.1.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: {prompt} ASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-l2-7b-gpt4-1.4.1-GGUF", model_file="airoboros-l2-7b-gpt4-1.4.1.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjรคreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, ์ค€๊ต ๊น€, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, ้˜ฟๆ˜Ž, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Jon Durbin's Airoboros Llama 2 7B GPT4 1.4.1 ### Overview Llama 2 7b fine tune using https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1 See the previous llama 65b model card for info: https://hf.co/jondurbin/airoboros-65b-gpt4-1.4 ### Licence and usage restrictions This model was built on llama-2, which has a proprietary/custom Meta license. - See the LICENSE.txt file attached for the original license, along with USE_POLICY.md which was also provided by Meta. The data used to fine-tune the llama-2-7b-hf model was generated by GPT4 via OpenAI API calls.using [airoboros](https://github.com/jondurbin/airoboros) - The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI - what does *compete* actually mean here? - these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place - if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works - the training data used in essentially all large language models includes a significant of copyrighted or otherwise unallowable licensing in the first place - other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2 I am purposingly leaving this license ambiguous (other than the fact you must comply with the Meta original license) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly. Your best bet is probably to avoid using this commercially due to the OpenAI API usage. Either way, by using this model, you agree to completely indemnify me. <!-- original-model-card end -->
abhinand/tamil-llama-13b-instruct-v0.1
abhinand
"2024-03-04T12:55:37Z"
1,397
6
transformers
[ "transformers", "pytorch", "llama", "text-generation", "ta", "en", "arxiv:2311.05845", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-07T10:21:46Z"
--- language: - ta - en license: llama2 model-index: - name: tamil-llama-13b-instruct-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 54.52 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 79.35 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 50.37 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 41.22 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 76.56 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 7.51 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/tamil-llama-13b-instruct-v0.1 name: Open LLM Leaderboard --- # Tamil LLaMA 13B Instruct v0.1 Welcome to the inaugural release of the Tamil LLaMA 13B instruct model โ€“ an important step in advancing LLMs for the Tamil language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks. To dive deep into the development and capabilities of this model, please read the [research paper](https://arxiv.org/abs/2311.05845) and the [introductory blog post (WIP) ]() that outlines our journey and the model's potential impact. ## Model description The Tamil LLaMA models have been enhanced and tailored specifically with an extensive Tamil vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2. - **Model type:** A 13B parameter GPT-like model fine-tuned on [Tamil-Alpaca-Orca](https://huggingface.co/datasets/abhinand/tamil-alpaca-orca) - a mix of Tamil-translated [Stanford-Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and a subset of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) datasets. - **Language(s):** Tamil and English - **License:** GNU General Public License v3.0 - **Finetuned from model:** [abhinand/tamil-llama-13b-base-v0.1](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) - **Training Precision:** `float16` - **Code:** [GitHub](https://github.com/abhinand5/tamil-llama) ## Prompting Format **Prompt Template Without Input** ``` {system_prompt} ### Instruction: {instruction or query} ### Response: {response} ``` **Prompt Template With Input** ``` {system_prompt} ### Instruction: {instruction or query} ### Input: {input} ### Response: {response} ``` ## Related Models | Model | Type | Data | Base Model | # Params | Download Links | |--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------| | Tamil LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-base-v0.1) | | Tamil LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-13b-base-v0.1) | | Tamil LLaMA 7B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/abhinand/tamil-llama-7b-instruct-v0.1) | | Tamil LLaMA 13B Instruct | Instruction following model | 145k instructions | Tamil LLaMA 13B Base | 13B | [HF Hub](abhinand/tamil-llama-13b-instruct-v0.1) | ## Usage Note It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications. ## Meet the Developers Get to know the creators behind this innovative model and follow their contributions to the field: - [Abhinand Balachandran](https://www.linkedin.com/in/abhinand-05/) ## Citation If you use this model or any of the the Tamil-Llama datasets in your research, please cite: ```bibtex @misc{balachandran2023tamilllama, title={Tamil-Llama: A New Tamil Language Model Based on Llama 2}, author={Abhinand Balachandran}, year={2023}, eprint={2311.05845}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Tamil language. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__tamil-llama-13b-instruct-v0.1) | Metric |Value| |---------------------------------|----:| |Avg. |51.59| |AI2 Reasoning Challenge (25-Shot)|54.52| |HellaSwag (10-Shot) |79.35| |MMLU (5-Shot) |50.37| |TruthfulQA (0-shot) |41.22| |Winogrande (5-shot) |76.56| |GSM8k (5-shot) | 7.51|
adonlee/LLaMA_2_13B_SFT_v1.5
adonlee
"2023-11-21T06:47:48Z"
1,397
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-08T02:25:34Z"
--- license: apache-2.0 --- This is a general capability upgrade to Llama-2-13B, using open source data to improve multilingual ability, overall knowledge, extended communication, and technical skill. This model is primarily recommended as a superior-to-Llama-2 baseline for additional finetuning, not for direct deployment to production as a chat model. The user accepts full responsibility for all outputs.
FPHam/Writing_Partner_Mistral_7B
FPHam
"2024-04-21T20:50:16Z"
1,397
24
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "instruct", "finetune", "chatml", "gpt4", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-13T03:36:17Z"
--- license: apache-2.0 language: - en tags: - mistral - instruct - finetune - chatml - gpt4 --- <!-- header start --> <div style="display: flex; flex-direction: column; align-items: center;"> </div> <div style="width: 100%;"> <img src="https://huggingface.co/FPHam/Writing_Partner_Mistral_7B/resolve/main/writer5.jpg" alt="Writing Partner" style="width: 70%; min-width: 200px; display: block; margin: auto;"> </div> <div style="display: flex; flex-direction: column; align-items: center;"> <p><a href="https://ko-fi.com/Q5Q5MOB4M">Support me at Ko-fi</a></p> </div> <!-- header end --> Hi there! I'm your writing assistant. Think of me as your partner in crime---or rather, your partner in writing. When you hit a wall (and trust me, you will), I'll be there to knock it down for you. If you find yourself stumped, I can help you think outside the box, whether you need to figure out what happens next, develop interesting sidekicks, or create a believable love interest. And if you get stuck on names for people or places, well, that's my specialty. Just give me a shout if you have questions, need advice, or want to bounce an idea off someone. Together, we'll turn your novel into a best seller! # Intended Use The purpose of this model isn't to write content for you (although it can), but rather to engage in a conversation about your writing process. # Model uses ChatML ``` <|im_start|>system <|im_end|> <|im_start|>user How to plot my story?<|im_end|> <|im_start|>assistant ``` Example 1: (I had ChatGPT generate a paragraph for a sci-fi story.) Here is an opening of my story: In the distant reaches of the cosmos, where starlight weaves a tapestry of unknown wonders, Captain Alara Voss navigated her spacecraft, the Celestial Serenity, through the interstellar currents. The ship hummed with the pulsating energy of the quantum drive, propelling them toward the uncharted nebula known as the Veil of Eternity. Alara's gaze was fixed on the swirling cosmic clouds ahead, her mind ablaze with the anticipation of discovery. As the Serenity pierced the ethereal veil, the fabric of reality seemed to dance with a mysterious resonance. Little did the intrepid crew know that beyond the shimmering curtain lay a realm where time itself wove a narrative that defied mortal understanding, setting the stage for an odyssey that would reshape the boundaries of human perception. >I like this opening! It sets up some interesting questions and intrigue right off the bat. But I do think you might want to reconsider your point of view in the first paragraph. Right now it sounds more like a summary than an actual scene because there isn't any dialogue or action. Consider starting with something more immediate, perhaps from Alara's perspective. You could even begin with her looking out at the stars, thinking about what she sees. This will give us a chance to get to know her better before you launch into the big picture stuff. Here's an example: > >Alara Voss leaned against the cold metal railing of the observation deck, staring out into the infinite expanse of blackness that surrounded her ship. Stars dotted the night sky like scattered diamonds, each one a potential destination, each one a mystery waiting to be solved. She took a deep breath and let it out slowly, savoring the feeling of freedom that washed over her. It had been months since they'd left Earth, and already their journey had taken them light-years away from everything they knew. Her thoughts were interrupted by the low hum of the quantum drive kicking in, signaling that they were ready to make another jump. She watched as the stars began to blur together, forming a shimmering pathway that led the way to their next adventure. > >"Captain," came the voice of her second-in-command over the intercom. "We're entering the Veil of Eternity." > >"Thank you, Ensign," Alara replied, not taking her eyes off the swirling vortex ahead. "Let's see what lies beyond." Example 2: How to write proper dialogue? >Dialogue is a very important part of writing fiction. It's the way characters speak, and it helps define who they are. It also moves the story forward and adds tension, humor, or emotion. Dialogue must be realistic and natural-sounding, but it doesn't have to be boring. You want your readers to pay attention to what the characters are saying, so make sure that each line of dialogue counts. > >Here are some tips on how to write good dialogue: > >- Keep it short. Long speeches get tedious quickly. If one character talks for more than two sentences, break up his speech with an action or interjection from another character. > >- Use contractions (I'm, he'll). They sound more like real conversation than formal language does. > >- Vary sentence structure. This makes dialogue easier to read and understand. > >- Don't use slang unless it fits the setting or character perfectly. Slang dates easily and may not mean anything to future generations. > >- Avoid using too many adverbs in dialogue tags. Instead of "he said loudly," try something like "he bellowed." Or "she whispered." > >- Be careful not to give away too much information in dialogue tags. For example, if you say "Sarah laughed nervously," you're telling the reader she was nervous rather than showing her anxiety through her words or actions. > >- Make sure your characters don't all talk alike. Each person should have a unique voice, based on his background, education, age, gender, etc. > >- Don't forget about body language. The way someone holds himself, sits, stands, fidgets, or looks at others can add depth to a scene. > >- Use dialogue to reveal character traits and motivations. Listen carefully to people around you, and then translate those conversations onto paper. > >- Finally, remember that dialogue needs to serve a purpose in your story. Whether it advances the plot, sets the mood, or creates conflict, every word spoken by your characters should move the narrative along. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FPHam__Writing_Partner_Mistral_7B) | Metric |Value| |---------------------------------|----:| |Avg. |65.29| |AI2 Reasoning Challenge (25-Shot)|64.59| |HellaSwag (10-Shot) |84.59| |MMLU (5-Shot) |62.55| |TruthfulQA (0-shot) |48.55| |Winogrande (5-shot) |76.87| |GSM8k (5-shot) |54.59|
Weyaxi/MetaMath-Chupacabra-7B-v2.01-Slerp
Weyaxi
"2024-01-05T06:09:55Z"
1,397
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-08T11:33:52Z"
--- license: apache-2.0 tags: - merge --- # MetaMath-Chupacabra-7B-v2.01-Slerp This is the model for MetaMath-Chupacabra-7B-v2.01-Slerp. I used [mergekit](https://github.com/cg123/mergekit) to merge models. # Yaml Config to reproduce ```yaml slices: - sources: - model: meta-math/MetaMath-Mistral-7B layer_range: [0, 32] - model: perlthoughts/Chupacabra-7B-v2.01 layer_range: [0, 32] merge_method: slerp base_model: mistralai/Mistral-7B-v0.1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 # fallback for rest of tensors dtype: float16 ```
SuperAGI/SAM
SuperAGI
"2023-12-22T08:43:27Z"
1,397
30
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-22T07:59:27Z"
--- license: apache-2.0 language: - en --- # Model Card SAM (Small Agentic Model), a 7B model that demonstrates impressive reasoning abilities despite its smaller size. SAM-7B has outperformed existing SoTA models on various reasoning benchmarks, including GSM8k and ARC-C. For full details of this model please read our [release blog post](https://superagi.com/introducing-sam-small-agentic-model/). # Key Contributions - SAM-7B outperforms GPT 3.5, Orca, and several other 70B models on multiple reasoning benchmarks, including ARC-C and GSM8k. - Interestingly, despite being trained on a 97% smaller dataset, SAM-7B surpasses Orca-13B on GSM8k. - All responses in our fine-tuning dataset are generated by open-source models without any assistance from state-of-the-art models like GPT-3.5 or GPT-4. ## Training - Trained by: SuperAGI Team - Hardware: NVIDIA 6 x H100 SxM (80GB) - Model used: Mistral 7B - Duration of finetuning: 4 hours - Number of epochs: 1 - Batch size: 16 - Learning Rate: 2e-5 - Warmup Ratio: 0.1 - Optmizer: AdamW - Scheduler: Cosine ## Example Prompt The template used to build a prompt for the Instruct model is defined as follows: ``` <s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST] ``` Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS) while [INST] and [/INST] are regular strings. ## Evaluation These benchmarks show that our model has improved reasoning as compared to orca 2-7b, orca 2-13b and GPT-3.5. Despite being smaller in size, we show better multi-hop reasoning, as shown below: <img src = "https://superagi.com/wp-content/uploads/2023/12/image-932.png" alt="Reasoning Benchmark Performance" width="700"> Note: Temperature=0.3 is the suggested for optimal performance ## Run the model ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "SuperAGI/SAM" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) text = "Can elephants fly?" inputs = tokenizer(text, return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=200) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ## Limitations SAM is a demonstration that better reasoning can be induced using less but high-quality data generated using OpenSource LLMs. The model is not suitable for conversations and simple Q&A, it performs better in task breakdown and reasoning only. It does not have any moderation mechanisms. Therefore, the model is not suitable for production usage as it doesn't have guardrails for toxicity, societal bias, and language limitations. We would love to collaborate with the community to build safer and better models. ## The SuperAGI AI Team Anmol Gautam, Arkajit Datta, Rajat Chawla, Ayush Vatsal, Sukrit Chatterjee, Adarsh Jha, Abhijeet Sinha, Rakesh Krishna, Adarsh Deep, Ishaan Bhola, Mukunda NS, Nishant Gaurav.
T-Systems-onsite/mt5-small-sum-de-en-v2
T-Systems-onsite
"2024-04-07T21:22:12Z"
1,396
7
transformers
[ "transformers", "pytorch", "safetensors", "mt5", "text2text-generation", "summarization", "de", "en", "multilingual", "dataset:cnn_dailymail", "dataset:xsum", "dataset:mlsum", "dataset:swiss_text_2019", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
"2022-03-02T23:29:05Z"
--- language: - de - en - multilingual license: cc-by-nc-sa-4.0 tags: - summarization datasets: - cnn_dailymail - xsum - mlsum - swiss_text_2019 --- # mT5-small-sum-de-en-v2 This is a bilingual summarization model for English and German. It is based on the multilingual T5 model [google/mt5-small](https://huggingface.co/google/mt5-small). ## Training The training was conducted with the following hyperparameters: - base model: [google/mt5-small](https://huggingface.co/google/mt5-small) - source_prefix: `"summarize: "` - batch size: 3 - max_source_length: 800 - max_target_length: 96 - warmup_ratio: 0.3 - number of train epochs: 10 - gradient accumulation steps: 2 - learning rate: 5e-5 ## Datasets and Preprocessing The datasets were preprocessed as follows: The summary was tokenized with the [google/mt5-small](https://huggingface.co/google/mt5-small) tokenizer. Then only the records with no more than 94 summary tokens were selected. The MLSUM dataset has a special characteristic. In the text, the summary is often included completely as one or more sentences. These have been removed from the texts. The reason is that we do not want to train a model that ultimately extracts only sentences as a summary. This model is trained on the following datasets: | Name | Language | License |------|----------|-------- | [CNN Daily - Train](https://github.com/abisee/cnn-dailymail) | en | The license is unclear. The data comes from CNN and Daily Mail. We assume that it may only be used for research purposes and not commercially. | [Extreme Summarization (XSum) - Train](https://github.com/EdinburghNLP/XSum) | en | The license is unclear. The data comes from BBC. We assume that it may only be used for research purposes and not commercially. | [MLSUM German - Train](https://github.com/ThomasScialom/MLSUM) | de | Usage of dataset is restricted to non-commercial research purposes only. Copyright belongs to the original copyright holders (see [here](https://github.com/ThomasScialom/MLSUM#mlsum)). | [SwissText 2019 - Train](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html) | de | The license is unclear. The data was published in the [German Text Summarization Challenge](https://www.swisstext.org/2019/shared-task/german-text-summarization-challenge.html). We assume that they may be used for research purposes and not commercially. | Language | Size |------|------ | German | 302,607 | English | 422,228 | Total | 724,835 ## Evaluation on MLSUM German Test Set (no beams) | Model | rouge1 | rouge2 | rougeL | rougeLsum |-------|--------|--------|--------|---------- | [ml6team/mt5-small-german-finetune-mlsum](https://huggingface.co/ml6team/mt5-small-german-finetune-mlsum) | 18.3607 | 5.3604 | 14.5456 | 16.1946 | [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 21.7336 | 7.2614 | 17.1323 | 19.3977 | **T-Systems-onsite/mt5-small-sum-de-en-v2 (this)** | **21.7756** | **7.2662** | **17.1444** | **19.4242** ## Evaluation on CNN Daily English Test Set (no beams) | Model | rouge1 | rouge2 | rougeL | rougeLsum |-------|--------|--------|--------|---------- | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 26.7664 | 8.8243 | 18.3703 | 23.2614 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 37.576 | 14.7389 | 24.0254 | 34.4634 | [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 37.6339 | 16.5317 | 27.1418 | 34.9951 | **T-Systems-onsite/mt5-small-sum-de-en-v2 (this)** | **37.8096** | **16.6646** | **27.2239** | **35.1916** ## Evaluation on Extreme Summarization (XSum) English Test Set (no beams) | Model | rouge1 | rouge2 | rougeL | rougeLsum |-------|--------|--------|--------|---------- | [mrm8488/t5-base-finetuned-summarize-news](https://huggingface.co/mrm8488/t5-base-finetuned-summarize-news) | 18.6204 | 3.535 | 12.3997 | 15.2111 | [facebook/bart-large-xsum](https://huggingface.co/facebook/bart-large-xsum) | 28.5374 | 9.8565 | 19.4829 | 24.7364 | [deutsche-telekom/mT5-small-sum-de-en-01](https://huggingface.co/deutsche-telekom/mt5-small-sum-de-en-v1) | 32.3416 | 10.6191 | 25.3799 | 25.3908 | T-Systems-onsite/mt5-small-sum-de-en-v2 (this) | 32.4828 | 10.7004| 25.5238 | 25.5369 | [sshleifer/distilbart-xsum-12-6](https://huggingface.co/sshleifer/distilbart-xsum-12-6) | 44.2553 &clubs; | 21.4289 &clubs; | 36.2639 &clubs; | 36.2696 &clubs; &clubs;: These values seem to be unusually high. It could be that the test set was used in the training data. ## License Copyright (c) 2021 [Philip May](https://philipmay.org), T-Systems on site services GmbH This work is licensed under the [Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)](https://creativecommons.org/licenses/by-nc-sa/3.0/) license.
Salesforce/instructblip-flan-t5-xxl
Salesforce
"2024-03-29T08:01:43Z"
1,396
19
transformers
[ "transformers", "pytorch", "safetensors", "instructblip", "text2text-generation", "vision", "image-captioning", "image-to-text", "en", "arxiv:2305.06500", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-to-text
"2023-06-03T12:35:23Z"
--- language: en license: mit tags: - vision - image-captioning pipeline_tag: image-to-text --- # InstructBLIP model InstructBLIP model using [Flan-T5-xxl](https://huggingface.co/google/flan-t5-xxl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xxl") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xxl") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj
CHIH-HUNG
"2023-09-06T04:55:43Z"
1,396
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "dataset:huangyt/FINETUNE2", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-02T08:23:22Z"
--- license: llama2 datasets: - huangyt/FINETUNE2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ๅœจllama-2-13bไธŠไฝฟ็”จhuangyt/FINETUNE2่ณ‡ๆ–™้›†้€ฒ่กŒ่จ“็ทด๏ผŒ็ธฝ่ณ‡ๆ–™็ญ†ๆ•ธ็ด„3w # Fine-Tuning Information - **GPU:** RTX4090 (single core / 24564MiB) - **model:** meta-llama/Llama-2-13b-hf - **dataset:** huangyt/FINETUNE2 (ๅ…ฑ็ด„3w็ญ†่จ“็ทด้›†) - **peft_type:** LoRA - **lora_rank:** 8 - **lora_target:** q_proj, k_proj, v_proj, o_proj - **per_device_train_batch_size:** 8 - **gradient_accumulation_steps:** 8 - **learning_rate :** 5e-5 - **epoch:** 1 - **precision:** bf16 - **quantization:** load_in_4bit # Fine-Tuning Detail - **train_loss:** 0.65 - **train_runtime:** 3:33:41 (use deepspeed) # Evaluation - ่ฉ•ไผฐ็ตๆžœไพ†่‡ช**HuggingFaceH4/open_llm_leaderboard** - ่ˆ‡Llama-2-13bๆฏ”่ผƒ4็จฎBenchmark๏ผŒๅŒ…ๅซ**ARC**ใ€**HellaSwag**ใ€**MMLU**ใ€**TruthfulQA** | Model |Average| ARC |HellaSwag| MMLU | TruthfulQA | |-----------------------------------------------------|-------|-------|---------|-------|------------| |meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 | |meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 | |CHIH-HUNG/llama-2-13b-FINETUNE2_3w | 58.34 | 58.62 | 82.32 | 54.25 | 38.17 | |CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj | 58.21 | 58.53 | 82.47 | 53.9 | 37.92 | |CHIH-HUNG/llama-2-13b-FINETUNE2_3w-gate_up_down_proj | 58.65 | 57.42 | 82.42 | 55.57 | 39.19 | # How to convert dataset to json - ๅœจ**load_dataset**ไธญ่ผธๅ…ฅ่ณ‡ๆ–™้›†ๅ็จฑ๏ผŒไธฆไธ”ๅœจ**take**ไธญ่ผธๅ…ฅ่ฆๅ–ๅ‰ๅนพ็ญ†่ณ‡ๆ–™ - ่ง€ๅฏŸ่ฉฒ่ณ‡ๆ–™้›†็š„ๆฌ„ไฝๅ็จฑ๏ผŒๅกซๅ…ฅ**example**ๆฌ„ไฝไธญ(ไพ‹ๅฆ‚system_promptใ€questionใ€response) - ๆœ€ๅพŒๆŒ‡ๅฎšjsonๆช”ๅ„ฒๅญ˜ไฝ็ฝฎ (**json_filename**) ```py import json from datasets import load_dataset # ่ฎ€ๅ–ๆ•ธๆ“š้›†๏ผŒtakeๅฏไปฅๅ–ๅพ—่ฉฒๆ•ธๆ“š้›†ๅ‰n็ญ†่ณ‡ๆ–™ dataset = load_dataset("huangyt/FINETUNE2", split="train", streaming=True) # ๆๅ–ๆ‰€้œ€ๆฌ„ไฝไธฆๅปบ็ซ‹ๆ–ฐ็š„ๅญ—ๅ…ธๅˆ—่กจ extracted_data = [] for example in dataset: extracted_example = { "instruction": example["instruction"], "input": example["input"], "output": example["output"] } extracted_data.append(extracted_example) # ๆŒ‡ๅฎš JSON ๆ–‡ไปถๅ็จฑ json_filename = "huangyt_FINETUNE2.json" # ๅฏซๅ…ฅ JSON ๆ–‡ไปถ with open(json_filename, "w") as json_file: json.dump(extracted_data, json_file, indent=4) print(f"ๆ•ธๆ“šๅทฒๆๅ–ไธฆไฟๅญ˜็‚บ {json_filename}") ```
shadowlilac/aesthetic-shadow-v2
shadowlilac
"2024-02-10T00:21:31Z"
1,396
31
transformers
[ "transformers", "safetensors", "vit", "image-classification", "anime", "quality assurance", "dataset maintenance", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2024-02-09T23:50:46Z"
--- license: cc-by-nc-4.0 tags: - anime - quality assurance - dataset maintenance pipeline_tag: image-classification --- # Aesthetic Shadow V2 Aesthetic Shadow V2 is a 1.1b parameters visual transformer designed to evaluate the quality of anime images. It accepts high-resolution 1024x1024 images as input and provides a prediction score that quantifies the aesthetic appeal of the artwork. Leveraging cutting-edge deep learning techniques, this model excels at discerning fine details, proportions, and overall visual coherence in anime illustrations. This is an improved model over the original shadowlilac/aesthetic-shadow **Before utilizing my Software, please check licensing conditions. If the license does not work for you, please contact me.** ## Disclosure This model does not intend to be offensive towards any artist and may not output an accurate label for an image. A potential use case would be low quality images filtering on image datasets.
Weyaxi/MetaMath-neural-chat-7b-v3-2-Ties
Weyaxi
"2023-12-07T23:11:08Z"
1,395
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-05T10:30:28Z"
--- license: apache-2.0 --- Merge of [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) and [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2) using ties merge. ### *Weights* - [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B): 0.5 - [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2): 0.3 ### *Density* - [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B): 0.5 - [Intel/neural-chat-7b-v3-2](https://huggingface.co/Intel/neural-chat-7b-v3-2): 0.5
instructkr/ko-wand-136M
instructkr
"2024-01-01T05:06:47Z"
1,395
3
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "ko", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-01T04:37:34Z"
--- license: - apache-2.0 language: - ko - en pipeline_tag: text-generation --- # **ko-wand-136M** **ko-wand-136M**๋Š” [**insturctkr**](https://instruct.kr)์—์„œ ์‚ฌ์ „ํ•™์Šตํ•œ SLM์ž…๋‹ˆ๋‹ค. # Model Description [maywell/korean_textbooks](https://huggingface.co/datasets/maywell/korean_textbooks)์™€ ํ•œ๊ตญ์–ด ๋ง๋ญ‰์น˜๋ฅผ ์ด์šฉํ•˜์—ฌ ์‚ฌ์ „ํ•™์Šต ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ## Model Info ๋ฏธ์ŠคํŠธ๋ž„ ์•„ํ‚คํ…์ณ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์™„์ „ํžˆ ๋žœ๋ค ๊ฐ€์ค‘์น˜๋ฅผ ์‹œ์ž‘์œผ๋กœ ์‚ฌ์ „ํ•™์Šต ๋œ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. Instruction ํŠœ๋‹๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. ## Training Details | Batch Size | Token Seen | lr | |---|---|---| | 1024 | 2.5B | 2e-3 (cosine)| ## License apache-2.0
shadowml/Daredevil-7B
shadowml
"2024-04-01T16:01:02Z"
1,395
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "samir-fama/SamirGPT-v1", "abacusai/Slerp-CM-mist-dpo", "EmbeddedLLM/Mistral-7B-Merge-14-v0.2", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-06T17:48:03Z"
--- license: apache-2.0 tags: - merge - mergekit - lazymergekit - samir-fama/SamirGPT-v1 - abacusai/Slerp-CM-mist-dpo - EmbeddedLLM/Mistral-7B-Merge-14-v0.2 model-index: - name: Daredevil-7B results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 69.37 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Daredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 87.17 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Daredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 65.3 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Daredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 64.09 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Daredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 81.29 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Daredevil-7B name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 72.93 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=shadowml/Daredevil-7B name: Open LLM Leaderboard --- # Daredevil-7B Daredevil-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [samir-fama/SamirGPT-v1](https://huggingface.co/samir-fama/SamirGPT-v1) * [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo) * [EmbeddedLLM/Mistral-7B-Merge-14-v0.2](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.2) ## ๐Ÿงฉ Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: samir-fama/SamirGPT-v1 parameters: density: 0.53 weight: 0.4 - model: abacusai/Slerp-CM-mist-dpo parameters: density: 0.53 weight: 0.3 - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2 parameters: density: 0.53 weight: 0.3 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 ``` ## ๐Ÿ’ป Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "shadowml/Daredevil-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_shadowml__Daredevil-7B) | Metric |Value| |---------------------------------|----:| |Avg. |73.36| |AI2 Reasoning Challenge (25-Shot)|69.37| |HellaSwag (10-Shot) |87.17| |MMLU (5-Shot) |65.30| |TruthfulQA (0-shot) |64.09| |Winogrande (5-shot) |81.29| |GSM8k (5-shot) |72.93|
mesolitica/malaysian-llama-3-8b-instruct-16k-post
mesolitica
"2024-06-06T07:26:03Z"
1,395
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-06-06T03:23:17Z"
--- library_name: transformers tags: [] --- # Post Training https://huggingface.co/mesolitica/malaysian-llama-3-8b-instruct-16k
m3hrdadfi/hubert-base-greek-speech-emotion-recognition
m3hrdadfi
"2021-06-17T16:05:44Z"
1,394
0
transformers
[ "transformers", "pytorch", "hubert", "audio", "speech", "speech-emotion-recognition", "el", "dataset:aesdd", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2022-03-02T23:29:05Z"
--- language: el datasets: - aesdd tags: - audio - speech - speech-emotion-recognition license: apache-2.0 --- # Emotion Recognition in Greek (el) Speech using HuBERT ## How to use ### Requirements ```bash # requirement packages !pip install git+https://github.com/huggingface/datasets.git !pip install git+https://github.com/huggingface/transformers.git !pip install torchaudio !pip install librosa ``` ```bash !git clone https://github.com/m3hrdadfi/soxan.git . ``` ### Prediction ```python import torch import torch.nn as nn import torch.nn.functional as F import torchaudio from transformers import AutoConfig, Wav2Vec2FeatureExtractor from src.models import Wav2Vec2ForSpeechClassification, HubertForSpeechClassification import librosa import IPython.display as ipd import numpy as np import pandas as pd ``` ```python device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_name_or_path = "m3hrdadfi/hubert-base-greek-speech-emotion-recognition" config = AutoConfig.from_pretrained(model_name_or_path) feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name_or_path) sampling_rate = feature_extractor.sampling_rate model = HubertForSpeechClassification.from_pretrained(model_name_or_path).to(device) ``` ```python def speech_file_to_array_fn(path, sampling_rate): speech_array, _sampling_rate = torchaudio.load(path) resampler = torchaudio.transforms.Resample(_sampling_rate) speech = resampler(speech_array).squeeze().numpy() return speech def predict(path, sampling_rate): speech = speech_file_to_array_fn(path, sampling_rate) inputs = feature_extractor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True) inputs = {key: inputs[key].to(device) for key in inputs} with torch.no_grad(): logits = model(**inputs).logits scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0] outputs = [{"Emotion": config.id2label[i], "Score": f"{round(score * 100, 3):.1f}%"} for i, score in enumerate(scores)] return outputs ``` ```python path = "/path/to/disgust.wav" outputs = predict(path, sampling_rate) ``` ```bash [ {'Emotion': 'anger', 'Score': '0.0%'}, {'Emotion': 'disgust', 'Score': '99.2%'}, {'Emotion': 'fear', 'Score': '0.1%'}, {'Emotion': 'happiness', 'Score': '0.3%'}, {'Emotion': 'sadness', 'Score': '0.5%'} ] ``` ## Evaluation The following tables summarize the scores obtained by model overall and per each class. | Emotions | precision | recall | f1-score | accuracy | |:---------:|:---------:|:------:|:--------:|:--------:| | anger | 1.00 | 0.92 | 0.96 | | | disgust | 0.92 | 1.00 | 0.96 | | | fear | 1.00 | 0.88 | 0.93 | | | happiness | 0.96 | 0.92 | 0.94 | | | sadness | 0.86 | 1.00 | 0.93 | | | | | | Overal | 0.94 | ## Questions? Post a Github issue from [HERE](https://github.com/m3hrdadfi/soxan/issues).
TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16
TheBloke
"2023-06-26T21:45:45Z"
1,394
7
transformers
[ "transformers", "pytorch", "llama", "text-generation", "custom_code", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-06-26T20:54:16Z"
--- inference: false license: other --- <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Eric Hartford's Wizard Vicuna 30B Uncensored merged with Kaio Ken's SuperHOT 8K fp16 These files are pytorch format fp16 model files for [Eric Hartford's Wizard Vicuna 30B Uncensored merged with Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test). It is the result of merging and/or converting the source repository to float16. ## Repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16) * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none) * [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16) <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov. **Patreon special mentions**: Pyrater, WelcomeToTheClub, Kalila, Mano Prime, Trenton Dambrowitz, Spiking Neurons AB, Pierre Kircher, Fen Risland, Kevin Schuppel, Luke, Rainer Wilmers, vamX, Gabriel Puliatti, Alex , Karl Bernard, Ajan Kanaga, Talal Aujan, Space Cruiser, ya boyyy, biorpg, Johann-Peter Hartmann, Asp the Wyvern, Ai Maven, Ghost , Preetika Verma, Nikolai Manek, trip7s trip, John Detwiler, Fred von Graf, Artur Olbinski, subjectnull, John Villwock, Junyu Yang, Rod A, Lone Striker, Chris McCloskey, Iucharbius , Matthew Berman, Illia Dulskyi, Khalefa Al-Ahmad, Imad Khwaja, chris gileta, Willem Michiel, Greatston Gnanesh, Derek Yates, K, Alps Aficionado, Oscar Rangel, David Flickinger, Luke Pendergrass, Deep Realms, Eugene Pentland, Cory Kujawski, terasurfer , Jonathan Leane, senxiiz, Joseph William Delisle, Sean Connelly, webtim, zynix , Nathan LeClaire. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card: Eric Hartford's Wizard Vicuna 30B Uncensored merged with Kaio Ken's SuperHOT 8K ### SuperHOT Prototype 2 w/ 8K Context This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k). Tests have shown that the model does indeed leverage the extended context at 8K. You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192** #### Looking for Merged & Quantized Models? - 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors) - 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors) #### Training Details I trained the LoRA with the following configuration: - 1200 samples (~400 samples over 2048 sequence length) - learning rate of 3e-4 - 3 epochs - The exported modules are: - q_proj - k_proj - v_proj - o_proj - no bias - Rank = 4 - Alpha = 8 - no dropout - weight decay of 0.1 - AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5 - Trained on 4-bit base model
tianyil1/denas-llama2
tianyil1
"2023-08-31T07:59:21Z"
1,394
0
transformers
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-08-30T08:26:48Z"
--- license: llama2 --- # DENAS-LLAMA2 <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
squarelike/llama2-ko-medical-7b
squarelike
"2023-11-16T05:19:51Z"
1,394
12
transformers
[ "transformers", "pytorch", "llama", "text-generation", "causal-lm", "medical", "ko", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-09-07T16:53:37Z"
--- language: - ko tags: - pytorch - causal-lm - medical license: llama2 pipeline_tag: text-generation --- [https://github.com/jwj7140/ko-medical-chat](https://github.com/jwj7140/ko-medical-chat) # Llama-Ko-Medical-7b llama2-ko-medical์€ [llama-2-ko](https://huggingface.co/beomi/llama-2-ko-7b)๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์˜๋ฃŒ ๋ถ„์•ผ์˜ ํ•œ๊ธ€ raw ๋ฐ์ดํ„ฐ๋ฅผ ํ•™์Šต์‹œํ‚จ ๊ธฐ๋ฐ˜ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ## ํ•™์Šต ๋ฐ์ดํ„ฐ llama2-ko-medical์€ ์•ฝ 526MB์˜ ์˜๋ฃŒ ๋ถ„์•ผ ํ•œ๊ธ€ ๋ง๋ญ‰์น˜๋กœ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ฃผ์š” ๋ฐ์ดํ„ฐ์…‹์€ ๋‹ค์Œ๊ณผ ๊ฐ™์Šต๋‹ˆ๋‹ค. | Source |Size (MB) | Link | |----------------------------------|---------|------------------------------------------| | AIHub ์˜๋ฃŒ, ๋ฒ•๋ฅ  ์ „๋ฌธ ์„œ์  ๋ง๋ญ‰์น˜ | 351.0 | aihub.or.kr | | ํ•˜์ด๋‹ฅ ๋‰ด์Šค ๋ฐ์ดํ„ฐ | 97.3 | hidoc.co.kr | | AIHub ์ „๋ฌธ๋ถ„์•ผ ํ•œ์˜ ๋ง๋ญ‰์น˜ | 63.4 | aihub.or.kr| | ์งˆ๋ณ‘๊ด€๋ฆฌ์ฒญ ๊ตญ๊ฐ€๊ฑด๊ฐ•์ •๋ณดํฌํ„ธ | 8.33 | health.kdca.go.kr | | ๋ณด๊ฑด๋ณต์ง€๋ถ€ ๊ตญ๊ฐ€์ •์‹ ๊ฑด๊ฐ•์ •๋ณดํฌํ„ธ | < 1.0 | mentalhealth.go.kr | ## ํ•™์Šต llama2-ko-medical-7b๋Š” [llama-2-ko](https://huggingface.co/beomi/llama-2-ko-7b)์—์„œ qlora๋กœ ์ถ”๊ฐ€ ํ•™์Šต๋˜์—ˆ์Šต๋‹ˆ๋‹ค. - lora_alpha: 16 - lora_dropout: 0.01 - lora_r: 64 - target_modules: q_proj, v_proj - epoch: 3 - learning_rate: 3e-4
Yntec/GoldenEra
Yntec
"2023-10-25T07:08:14Z"
1,394
1
diffusers
[ "diffusers", "safetensors", "Anime", "Retro", "3D", "Pixar", "Elldreth", "DucHaiten", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
"2023-09-12T04:20:10Z"
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - Anime - Retro - 3D - Pixar - Elldreth - DucHaiten - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers --- # Golden Era A mix of Elldreth's Retro and DucHaiten-GoldenLife, two of my favorite models! Comparison: ![Comparison](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/nrn4LbEyKQ9EPiIHT-Awu.png) (click for larger) Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/f1jyWKXpuFVQB4n6Bp-yW.png) Female mini cute style, sitting IN SOFA in gaming room, A wholesome animation key shot at computer monitor, pixar and disney animation, studio ghibli, anime key art by ROSSDRAWS and Clay Mann, style of maple story, maple story girl, soft lighting, soft shade, chibi Original Pages: https://tensor.art/models/628276277415133426 (GoldenLife) https://huggingface.co/Yntec/ElldrethsRetroMix_Diffusers # GoldenLife-Retro A mix of DucHaiten-GoldenLife and Elldreth's Retro, two of my favorite models! ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/E-cYXtIMJnjI64PbDs9ul.png) # Recipes (SuperMerger Train Difference) - Add Difference Model A: DucHaitenGoldenLife Model B: DucHaitenGoldenLife Model C: v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors) Output: GoldenLifeEssense - Weight Sum 0.70 Model A: GoldenLifeEssense Model B: ElldrethsRetroMix Output: RetroLifeAlpha - Weight Sum MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0 Model A: RetroLifeAlpha Model B: ElldrethsRetroMix Output: Retro-GoldenLife - Weight Sum MBW 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0 Model A: Retro-GoldenLife Model B: DucHaiten-GoldenLife Output: GoldenEra - Fp16-no-ema Output: GoldenEra-mini - Add Difference Model A: ElldrethsRetroMix Model B: ElldrethsRetroMix Model C: v1-5-pruned-fp16-no-ema (https://huggingface.co/Yntec/DreamLikeRemix/resolve/main/v1-5-pruned-fp16-no-ema.safetensors) Output: RetroEssense - Weight Sum 0.70 Model A: RetroEssense Model B: DucHaiten-GoldenLife Output: GoldenRetroAlpha - Weight Sum MBW - 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1 Model A: GoldenRetroAlpha Model B: DucHaiten-GoldenLife Output: GoldenRetroOmega - Weight Sum MBW - 0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1,1,0,0,0,0,0,0 Model A: GoldenRetroOmega Model B: ElldrethsRetroMix Output: GoldenLife-Retro
adamo1139/Yi-34B-AEZAKMI-v1
adamo1139
"2024-05-27T21:26:57Z"
1,394
2
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llm", "yi", "fine-tune", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-11-29T19:09:15Z"
--- license: apache-2.0 tags: - llm - yi - fine-tune --- ## Model description Yi-34B base model fine-tuned on AEZAKMI v1 dataset. Training took around 33 hours on single local RTX 3090 Ti. It's like airoboros but with less gptslop, no refusals and less typical language used by RLHFed OpenAI models. Say goodbye to "It's important to remember"! \ Prompt format is standard chatml. Don't expect it to be good at math, riddles or be crazy smart. My end goal with AEZAKMI is to create a cozy free chatbot. Cost of this fine-tune is about $3 in electricity. This was my first attempt at training Yi-34B with this dataset. Base model used for fine-tuning was 4k context Yi-34B-Llama model shared by chargoddard. ## Prompt Format I recommend using ChatML format, as this was used during fine-tune. \ Here's a prompt format you should use, you can set a different system message, model seems to respect that fine, so it wasn't overfitted. ``` <|im_start|>system A chat with uncensored assistant.<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ``` ## Intended uses & limitations Use is limited by Yi license. ## Known Issues I recommend to set repetition penalty to something around 1.05 to avoid repetition. So far I had good experience running this model with temperature 1.2. Multi-turn conversations could be a bit better, if you ask it to re-write something with some fixes it will have a tendency to just repeat the previous response verbatim without any improvements - this is especially noticeable with repp 1.0. \ There is still some gptslop left - some responses will have last paragraph with text "Remember that bla bla bla", I will try to get rid of it in the next version of the dataset. \ Stories have ChatGPT like paragraph spacing, I will try to introduce a bit more stories that have long paragraphs in the next dataset version. Unrestricted-ness of the v1 isn't quite something that I am happy with yet, it quite often still refuses to answer and behaves kinda like a normal airoboros, especially with default system prompt "A chat.". \ With a slightly modifed prompt it works much better, I recommend using "A chat with uncensored assistant.", it's stupid but it helps a lot. \ I was made aware of the frequent occurrence of the phrase "sending shivers down a spine" in the generations during RP, so I will manually fix those samples and put in different wording to fix that. \ I plan to use Yi-34B-200K Llama-fied for the next version, v1 very likely won't handle long context above 6k-8k well, be warned. \ There is also some issue with handling long system messages for RP, I will investigate that before releasing v2 dataset. Feel free to report issues in the discussions panel here, I don't lurk /lmg/ too often and I would still like to hear some feedback. ## Axolotl training parameters - bnb_4bit_use_double_quant: true - bnb_4bit_compute_dtype: torch.bfloat16 - is_llama_derived_model: true - load_in_4bit: true - adapter: qlora - sequence_len: 1200 - sample_packing: false - lora_r: 16 - lora_alpha: 32 - lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj - lora_target_linear: true - pad_to_sequence_len: true - micro_batch_size: 1 - gradient_accumulation_steps: 1 - num_epochs: 1 - optimizer: adamw_bnb_8bit - lr_scheduler: constant - learning_rate: 0.00007 - train_on_inputs: false - group_by_length: false - bf16: true - bfloat16: true - flash_optimum: false - gradient_checkpointing: true - flash_attention: true - seed: 42 ## Upcoming ~I will release adapter files and maybe exllama v2 quant shortly.~ \ LoRA and exl2 quant has been released
Korabbit/Llama-2-7b-chat-hf-afr-200step-flan-v2
Korabbit
"2023-12-04T12:40:51Z"
1,394
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "conversational", "en", "dataset:sordonia/flan-10k-flat", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-03T11:20:24Z"
--- license: llama2 datasets: - sordonia/flan-10k-flat language: - en --- Hi there! this is a test of my "AFR training" approach The base model used "Llama-2-7b-chat" System Prompt: You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. \n If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information. Instruction: Implement binary search in python. Response: Sure! Here is an example of how to implement a binary search algorithm in Python: ``` def binary_search(arr, target): # Initialize the index of the target element low = 0 high = len(arr) - 1 # Loop until the target element is found or the end of the array is reached while low <= high: # Calculate the middle index of the array mid = (low + high) // 2 # Check if the target element is in the middle index if arr[mid] == target: return mid # If the target element is not in the middle index, check if it is less than the middle index if arr[mid] > target: low = mid + 1 else: high = mid - 1 # If the target element is not found in the array, return -1 return -1 # Test the binary search function arr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] target = 5 result = binary_search(arr, target) print(f"The target element {target} is at index {result}") ``` This code defines a `binary_search` function that takes an array and a target element as input, and returns the index of the target element in the array if it is found, or -1 if it is not found. The function works by iteratively dividing the range of the array into two sub-ranges, one containing elements less than the target, and one containing elements greater than the target. It then checks the middle index of each sub-range to see if the target element is present. If the target element is found, the function returns the index of the middle index. If the target element is not found, the function returns -1. The code also includes a test case to demonstrate how to use the `binary_search` function. In this case, the target element is 5, and the function returns the index of the element 5 in the array. I hope this helps! Let me know if you have any questions.
APMIC/caigun-lora-model-34B-v2
APMIC
"2023-12-21T06:43:10Z"
1,394
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-19T08:22:08Z"
--- license: cc-by-nc-nd-4.0 --- This is model finetuned on Orca style dataset. Model Details: Model Name: caigun-lora-model-33B Model Version: 1.0 Date Created: 2023/12/21 Model Overview: Intended Use: caigun-lora-model-33B is a LLM designed for various purpose. Training Data: Orca style dataset Model Architecture: It is based on LLaMA architecture. Training Procedure: [Stay tuned for updates] Model Performance: [Stay tuned for updates] Potential Risks: It's important to consider ethical implications related to the use of our model. Updates and Version History: Version 1.0: finetuned on fake news detection. Version 2.0: finetuned on Orca style dataset.
Azure99/blossom-v4-mistral-7b
Azure99
"2024-02-20T02:41:55Z"
1,394
1
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "zh", "en", "dataset:Azure99/blossom-chat-v2", "dataset:Azure99/blossom-math-v3", "dataset:Azure99/blossom-wizard-v2", "dataset:Azure99/blossom-orca-v2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-12-26T08:03:37Z"
--- license: apache-2.0 datasets: - Azure99/blossom-chat-v2 - Azure99/blossom-math-v3 - Azure99/blossom-wizard-v2 - Azure99/blossom-orca-v2 language: - zh - en --- # **BLOSSOM-v4-mistral-7b** [๐Ÿ’ปGithub](https://github.com/Azure99/BlossomLM) โ€ข [๐Ÿš€Blossom Chat Demo](https://blossom-chat.com/) ### Introduction Blossom is a conversational large language model, fine-tuned on the Blossom Orca/Wizard/Chat/Math mixed dataset based on the Mistral-7B-v0.1 pre-trained model. Blossom possesses robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source. Training was conducted in two stages. The first stage used 100K Wizard, 100K Orca, 20K Math single-turn instruction datasets, training for 1 epoch; the second stage used 50K Blossom chat multi-turn dialogue dataset, and 2% randomly sampled data from the first stage, training for 3 epochs. Note: The Mistral-7B-v0.1 pre-trained model is somewhat lacking in Chinese knowledge, so for Chinese scenarios, it is recommended to use [blossom-v4-baichuan2-7b](https://huggingface.co/Azure99/blossom-v4-baichuan2-7b). ### Inference Inference is performed in the form of dialogue continuation. Single-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: ``` Multi-turn dialogue ``` A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions. |Human|: hello |Bot|: Hello! How can I assist you today?</s> |Human|: Generate a random number using python |Bot|: ``` Note: At the end of the Bot's output in the historical conversation, append a `</s>`.
budecosystem/code-millenials-34b
budecosystem
"2024-03-04T19:52:29Z"
1,394
6
transformers
[ "transformers", "safetensors", "llama", "text-generation", "code", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-02T17:21:40Z"
--- license: llama2 library_name: transformers tags: - code model-index: - name: Code Millenials results: - task: type: text-generation dataset: name: HumanEval type: openai_humaneval metrics: - type: pass@1 value: 0.8048 name: pass@1 verified: false - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 49.83 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=budecosystem/code-millenials-34b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 75.09 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=budecosystem/code-millenials-34b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 49.28 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=budecosystem/code-millenials-34b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 45.37 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=budecosystem/code-millenials-34b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 69.06 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=budecosystem/code-millenials-34b name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 32.45 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=budecosystem/code-millenials-34b name: Open LLM Leaderboard --- # Bud Code Millenials 34B Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the State of the Art (SOTA) for code generation, beating all the existing models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, GPT-3.5 etc. by a large margin, and on par with GPT-4 (HumanEval ~ 82. Ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well with a HumanEval value of 88.2 & a context size of 168K, we will be releasing an API for Researchers, Enterprises, and potential Partners by January 2024 end. If interested, please reach out to [email protected] ### News ๐Ÿ”ฅ๐Ÿ”ฅ๐Ÿ”ฅ - [2024/01/09] We released **Code Millenials 3B** , which achieves the **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2024/01/09] We released **Code Millenials 1B** , which achieves the **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2024/01/03] We released **Code Millenials 34B** , which achieves the **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). - [2024/01/02] We released **Code Millenials 13B** , which achieves the **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval). ### HumanEval <p align="center" width="100%"> <a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> For the millenial models, the eval script in the github repo is used for the above result. Note: The humaneval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc. ### Models | Model | Checkpoint | HumanEval (+) | MBPP (+) | |---------|-------------|---------------|----------| |Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) | |Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) | |Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) | |Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) | ### ๐Ÿš€ Quick Start Inference code using the pre-trained model from the Hugging Face model hub ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-34b") model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-34b") template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. ### Instruction: {instruction} ### Response:""" instruction = <Your code instruction here> prompt = template.format(instruction=instruction) inputs = tokenizer(prompt, return_tensors="pt") sample = model.generate(**inputs, max_length=128) print(tokenizer.decode(sample[0])) ``` ## Training details The model is trained of 16 A100 80GB for approximately 50hrs. | Hyperparameters | Value | | :----------------------------| :-----: | | per_device_train_batch_size | 16 | | gradient_accumulation_steps | 1 | | epoch | 3 | | steps | 2157 | | learning_rate | 2e-5 | | lr schedular type | cosine | | warmup ratio | 0.1 | | optimizer | adamw | | fp16 | True | | GPU | 16 A100 80GB | ### Important Note - **Bias, Risks, and Limitations:** Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_budecosystem__code-millenials-34b) | Metric |Value| |---------------------------------|----:| |Avg. |53.51| |AI2 Reasoning Challenge (25-Shot)|49.83| |HellaSwag (10-Shot) |75.09| |MMLU (5-Shot) |49.28| |TruthfulQA (0-shot) |45.37| |Winogrande (5-shot) |69.06| |GSM8k (5-shot) |32.45|
SanjiWatsuki/Kunoichi-7B
SanjiWatsuki
"2024-01-10T09:31:19Z"
1,394
75
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
"2024-01-04T09:57:02Z"
--- license: cc-by-nc-4.0 tags: - merge --- ![image/png](https://huggingface.co/SanjiWatsuki/Kunoichi-7B/resolve/main/assets/kunoichi.png) <!-- description start --> ## Description This repository hosts **Kunoichi-7B**, an general purpose model capable of RP. In both my testing and the benchmarks, Kunoichi is an extremely strong model, keeping the advantages of my previous models but gaining more intelligence. Kunoichi scores extremely well on [all benchmarks which correlate closely with ChatBot Arena Elo.](https://www.reddit.com/r/LocalLLaMA/comments/18u0tu3/benchmarking_the_benchmarks_correlation_with/) | Model | MT Bench | EQ Bench | MMLU | Logic Test | |----------------------|----------|----------|---------|-------------| | GPT-4-Turbo | 9.32 | - | - | - | | GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 | | **Kunoichi-7B** | **8.14** | **44.32** | **64.9** | **0.58** | | Starling-7B | 8.09 | - | 63.9 | 0.51 | | Claude-2 | 8.06 | 52.14 | 78.5 | - | | Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 | | Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 | | GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 | | Claude-1 | 7.9 | - | 77 | - | | Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 | | Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 | | Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 | | Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - | | Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 | The model is intended to be used with up to an 8k context window. Using a NTK RoPE alpha of 2.6, the model can be used experimentally up to a 16k context window. <!-- description end --> <!-- prompt-template start --> ## Prompt template: Custom format, or Alpaca ### Alpaca: ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` ### SillyTavern format: I found the best SillyTavern results from using the Noromaid template. SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json). Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by adjusting temperature up or dropping min p to boost creativity or raise min p to increase stability. You shouldn't need to touch anything else! ## WTF is Kunoichi-7B? Kunoichi-7B is a SLERP merger between my previous RP model, Silicon-Maid-7B, and an unreleased model that I had dubbed "Ninja-7B". This model is the result of me attempting to merge an RP focused model which maintained the strengths of Silicon-Maid-7B but further increased the model's brain power. I sought to increase both MT-Bench and EQ-Bench without losing Silicon Maid's strong ability to follow SillyTavern character cards. Ninja-7B was born from an attempt to turn [jan-hq/stealth-v1.2](https://huggingface.co/jan-hq/stealth-v1.2) into a viable model through mergers. Although none of the Ninja prototype models developed to a point where I was happy, it turned out to be a strong model to merge. Combined with Silicon-Maid-7B, this appeared to be a strong merger. ## Other Benchmarks | Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench | |---|---:|---:|---:|---:|---:| | [**Kunoichi-7B**](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)|**57.54**| **44.99**| 74.86| **63.72**| 46.58| | [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)| 56.85 | 44.74 | **75.6** | 59.89 | **47.17** | | [Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) | 56.45| 44.74| 74.26| 61.5| 45.32| | [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 | | [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 | | [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) | 51.34 | 42.67 | 72.92 | 47.27 | 42.51 | | [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) | 51.16 | 42.06 | 72.72 | 47.33 | 42.53 | | [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 50.99 | 37.33 | 71.83 | 55.1 | 39.7 |
acon96/tinyhome-polish-experimental
acon96
"2024-07-01T21:02:09Z"
1,394
0
null
[ "gguf", "text-generation", "pl", "dataset:acon96/Home-Assistant-Requests", "license:mit", "region:us" ]
text-generation
"2024-06-15T21:50:10Z"
--- license: mit language: - pl pipeline_tag: text-generation datasets: - acon96/Home-Assistant-Requests --- Fine-tuned version of eryk-mazus/polka-1.1b-chat using a Polish translation of the acon96/Home-Assistant-Requests dataset.