Dataset columns:

| Column | Type | Observed range / distinct values |
|:--|:--|:--|
| modelId | string | length 5–122 |
| author | string | length 2–42 |
| last_modified | timestamp[us, tz=UTC] | |
| downloads | int64 | 0–738M |
| likes | int64 | 0–11k |
| library_name | string (categorical) | 245 values |
| tags | list | length 1–4.05k |
| pipeline_tag | string (categorical) | 48 values |
| createdAt | timestamp[us, tz=UTC] | |
| card | string | length 1–901k |
ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16
ISTA-DASLab
2024-05-13T18:14:58Z
413
10
transformers
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-3", "conversational", "text-generation-inference", "arxiv:2401.06118", "autotrain_compatible", "endpoints_compatible", "aqlm", "region:us" ]
text-generation
2024-04-21T20:45:22Z
---
library_name: transformers
tags:
- llama
- facebook
- meta
- llama-3
- conversational
- text-generation-inference
---

Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct). For this quantization, we used one codebook of 16 bits.

Results:

| Model | Quantization | MMLU (5-shot) | GSM8k (8-shot) | ArcC | ArcE | Hellaswag | Winogrande | PiQA | Model size, GB |
|------|------|-------|------|------|------|------|------|------|------|
| meta-llama/Meta-Llama-3-8B-Instruct | None | 0.6560 | 0.7475 | 0.5299 | 0.8165 | 0.5771 | 0.7867 | 0.7206 | 16.1 |
| | 1x16 | 0.5872 | 0.5087 | 0.4590 | 0.7710 | 0.5491 | 0.7726 | 0.6953 | 4.1 |

**Update (02.05.2024):** this version of the model was produced with an improved fine-tuning procedure.
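As a minimal loading sketch (assuming a recent `transformers` release with AQLM support, the `aqlm` package, and `accelerate` installed; the prompt and generation settings below are illustrative, not from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ISTA-DASLab/Meta-Llama-3-8B-Instruct-AQLM-2Bit-1x16"

# The AQLM checkpoint loads through the standard transformers API once `aqlm` is installed.
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```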
adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.2-fft
adalbertojunior
2024-04-22T21:41:40Z
413
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "pt", "dataset:adalbertojunior/dolphin_pt_test", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-22T20:40:07Z
---
datasets:
- adalbertojunior/dolphin_pt_test
language:
- pt
---

## How to use

```
import transformers
import torch

model_id = "adalbertojunior/Llama-3-8B-Instruct-Portuguese-v0.2-fft"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
    {"role": "user", "content": "Quem é você?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

### Prompt format

```
<|im_start|>system
Você é um assistente útil com respostas curtas.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
mmnga/YuisekinAIEvol-Mistral-7B-ja-math-v0.1.1-gguf
mmnga
2024-04-29T15:52:08Z
413
2
null
[ "gguf", "en", "ja", "dataset:TFMC/imatrix-dataset-for-japanese-llm", "license:apache-2.0", "region:us" ]
null
2024-04-29T14:18:07Z
---
license: apache-2.0
language:
- en
- ja
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
---

# YuisekinAIEvol-Mistral-7B-ja-math-v0.1.1-gguf

This is a GGUF-format conversion of [yuiseki's YuisekinAIEvol-Mistral-7B-ja-math-v0.1.1](https://huggingface.co/yuiseki/YuisekinAIEvol-Mistral-7B-ja-math-v0.1.1).

The imatrix data was built using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).

## Usage

```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'YuisekinAIEvol-Mistral-7B-ja-math-v0.1.1-Q4_0.gguf' -p "[INST] 今晩の夕食のレシピを教えて [/INST] " -n 128
```
pszemraj/jamba-900M-v0.13-KIx2
pszemraj
2024-05-18T19:27:54Z
413
3
transformers
[ "transformers", "safetensors", "jamba", "text-generation", "smol MoE", "smol", "en", "dataset:BEE-spoke-data/knowledge-inoc-concat-v1", "dataset:BEE-spoke-data/wikipedia-20230901.en-deduped", "dataset:BEE-spoke-data/fineweb-100k_en-med", "dataset:BEE-spoke-data/fineweb-1M_en-med", "dataset:BEE-spoke-data/fineweb-1M_longish", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2024-05-06T06:28:12Z
---
license: apache-2.0
tags:
- jamba
- smol MoE
- smol
metrics:
- accuracy
datasets:
- BEE-spoke-data/knowledge-inoc-concat-v1
- BEE-spoke-data/wikipedia-20230901.en-deduped
- BEE-spoke-data/fineweb-100k_en-med
- BEE-spoke-data/fineweb-1M_en-med
- BEE-spoke-data/fineweb-1M_longish
language:
- en
inference: false
---

# jamba-900M-v0.13-KIx2

<a href="https://colab.research.google.com/gist/pszemraj/62d037d0d93656ef2101d7e29e3b7220/jamba-test-sandbox.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>

> The API widget is off as it isn't supported by hf yet - try the Colab

This is a pretraining experiment on the `jamba` arch as a "smol MoE".

Details:

- pretrained at context length 16384
- seen approx 20b tokens
- uses Claude3 tokenizer (as hf GPT2 tokenizer)
- hidden size 1024, 12 layers, 8 experts

It achieves the following results on the evaluation set (_most recent dataset_):

- Loss: 3.0366
- Accuracy: 0.4514
- Num Input Tokens Seen: 1975517184

If I pretrain it further, other versions will be in new repos with an incremented version (this is v0.13).

## Quick eval

Quick eval for: pszemraj/jamba-H1024_L12-v0.13-KIx2

hf (pretrained=pszemraj/jamba-H1024_L12-v0.13-KIx2,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: 0.9999, num_fewshot: None, batch_size: 8

| Tasks |Version|Filter|n-shot| Metric | Value | |Stderr|
|--------------|------:|------|-----:|----------|-------:|---|-----:|
|winogrande | 1|none | 0|acc | 0.5067|± |0.0141|
|piqa | 1|none | 0|acc | 0.5912|± |0.0138|
| | |none | 0|acc_norm | 0.5951|± |0.0138|
|openbookqa | 1|none | 0|acc | 0.1800|± |0.0172|
| | |none | 0|acc_norm | 0.2920|± |0.0204|
|lambada_openai| 1|none | 0|perplexity|103.1241|± |8.5843|
| | |none | 0|acc | 0.2502|± |0.0122|
|boolq | 2|none | 0|acc | 0.6196|± |0.0136|
|arc_easy | 1|none | 0|acc | 0.3836|± |0.0137|
| | |none | 0|acc_norm | 0.3694|± |0.0136|

## Example outputs

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60bccec062080d33f875cd0c/wky-qjUtS0AJ6YtIsJh3T.png)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80085
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:|
| 3.2013 | 0.4241 | 200 | 3.0653 | 0.4479 | 419430400 |
| 3.1976 | 0.8481 | 400 | 3.0434 | 0.4506 | 838860800 |
| 3.1485 | 1.2722 | 600 | 3.0375 | 0.4513 | 1258291200 |
| 3.1871 | 1.6963 | 800 | 3.0366 | 0.4514 | 1677721600 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
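A minimal generation sketch (assuming the checkpoint loads through `AutoModelForCausalLM` with `trust_remote_code=True`, as in the quick-eval command above; the prompt and generation settings are illustrative, not from the original card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "pszemraj/jamba-900M-v0.13-KIx2"
tokenizer = AutoTokenizer.from_pretrained(repo)
# trust_remote_code mirrors the lm-eval invocation shown in the quick eval above.
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("The Eiffel Tower is located in", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```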
TroyDoesAI/Phi-3-Context-Obedient-RAG
TroyDoesAI
2024-05-18T14:44:30Z
413
20
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-05-08T15:19:30Z
---
license: cc-by-sa-4.0
---

Base Model: microsoft/Phi-3-mini-128k-instruct

## Overview

This model is meant to enhance adherence to provided context (e.g., for RAG applications) and reduce hallucinations, inspired by the airoboros context-obedient question-answer format.

# Contextual DPO

## Overview

The format for a contextual prompt is as follows:

```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.

- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of the instruction set

Here's a trivial, but important example to prove the point:

```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are blueberries? Source?
ENDINSTRUCTION
```

And the expected response:

```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```

### References in response

As shown in the example, the dataset includes many examples of including source details in the response, when the question asks for a source/citation/reference.

Why do this? Well, the R in RAG seems to be the weakest link in the chain. Retrieval accuracy, depending on many factors including the overall dataset size, can be quite low. This accuracy increases when retrieving more documents, but then you have the issue of actually using the retrieved documents in prompts. If you use one prompt per document (or document chunk), you know exactly which document the answer came from, so there's no issue. If, however, you include multiple chunks in a single prompt, it's useful to include the specific reference chunk(s) used to generate the response, rather than naively including references to all of the chunks included in the prompt.

For example, suppose I have two documents:

```
url: http://foo.bar/1
Strawberries are tasty.

url: http://bar.foo/2
The cat is blue.
```

If the question being asked is `What color is the cat?`, I would only expect the second document to be referenced in the response, as the other link is irrelevant.
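A small helper sketch that assembles a prompt in exactly the delimiter format described above; the function name and argument shapes are hypothetical, not part of the original card:

```python
def build_contextual_prompt(blocks, instruction):
    """blocks: list of (metadata_dict, text) pairs; instruction: the question(s) to answer."""
    parts = []
    for metadata, text in blocks:
        parts.append("BEGININPUT")
        parts.append("BEGINCONTEXT")
        # one "key: value" line per metadata entry, as in the format above
        parts.extend(f"{key}: {value}" for key, value in metadata.items())
        parts.append("ENDCONTEXT")
        parts.append(text)
        parts.append("ENDINPUT")
    parts += ["BEGININSTRUCTION", instruction, "ENDINSTRUCTION"]
    return "\n".join(parts)


# Reproduces the blueberry example from the card.
prompt = build_contextual_prompt(
    [({"date": "2021-01-01", "url": "https://web.site/123"},
      "In a shocking turn of events, blueberries are now green, but will be sticking with the same name.")],
    "What color are blueberries? Source?",
)
print(prompt)
```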
Bloof/distilgpt2-finetuned-furryirl2
Bloof
2024-05-09T05:11:30Z
413
0
transformers
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-09T03:47:17Z
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-furryirl2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilgpt2-finetuned-furryirl2

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.3031

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8899 | 1.0 | 852 | 3.5930 |
| 3.7515 | 2.0 | 1704 | 3.4929 |
| 3.6564 | 3.0 | 2556 | 3.4250 |
| 3.6221 | 4.0 | 3408 | 3.3787 |
| 3.5517 | 5.0 | 4260 | 3.3441 |
| 3.5009 | 6.0 | 5112 | 3.3200 |
| 3.5012 | 7.0 | 5964 | 3.3080 |
| 3.4741 | 8.0 | 6816 | 3.3031 |

### Framework versions

- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
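A minimal inference sketch (assuming the checkpoint works with the standard `text-generation` pipeline like its distilgpt2 base; the prompt and sampling settings are illustrative, not from the original card):

```python
from transformers import pipeline

# Standard GPT-2-style text generation via the transformers pipeline.
generator = pipeline("text-generation", model="Bloof/distilgpt2-finetuned-furryirl2")
result = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(result[0]["generated_text"])
```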
mradermacher/Llama-3-Refueled-Pruned-GGUF
mradermacher
2024-05-22T18:50:34Z
413
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "dataset:yahma/alpaca-cleaned", "base_model:QuietImpostor/Llama-3-Refueled-Pruned", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-05-22T18:07:17Z
--- base_model: QuietImpostor/Llama-3-Refueled-Pruned datasets: - yahma/alpaca-cleaned language: - en library_name: transformers license: llama3 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hfhfix --> <!-- ### vocab_type: --> static quants of https://huggingface.co/QuietImpostor/Llama-3-Refueled-Pruned <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q2_K.gguf) | Q2_K | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.IQ3_XS.gguf) | IQ3_XS | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q3_K_S.gguf) | Q3_K_S | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.IQ3_S.gguf) | IQ3_S | 2.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.IQ3_M.gguf) | IQ3_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q3_K_M.gguf) | Q3_K_M | 3.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q3_K_L.gguf) | Q3_K_L | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.IQ4_XS.gguf) | IQ4_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q4_K_S.gguf) | Q4_K_S | 3.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q4_K_M.gguf) | Q4_K_M | 3.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q5_K_S.gguf) | Q5_K_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q5_K_M.gguf) | Q5_K_M | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q6_K.gguf) | Q6_K | 4.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.Q8_0.gguf) | Q8_0 | 6.3 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Llama-3-Refueled-Pruned-GGUF/resolve/main/Llama-3-Refueled-Pruned.f16.gguf) | f16 | 11.8 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: 
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
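A sketch of loading one of the quants listed above from Python (assuming `huggingface_hub` and `llama-cpp-python` are installed; the context size and prompt are illustrative, not from the original card):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the GGUF files from the table above, then load it with llama-cpp-python.
model_path = hf_hub_download(
    repo_id="mradermacher/Llama-3-Refueled-Pruned-GGUF",
    filename="Llama-3-Refueled-Pruned.Q4_K_M.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)

result = llm("Summarize what GGUF quantization is in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```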
mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF
mradermacher
2024-05-24T01:03:22Z
413
0
transformers
[ "transformers", "gguf", "not-for-all-audiences", "en", "base_model:crestf411/llama-3-daybreak-v0.3-70b-hf", "endpoints_compatible", "region:us" ]
null
2024-05-22T22:54:06Z
--- base_model: crestf411/llama-3-daybreak-v0.3-70b-hf language: - en library_name: transformers quantized_by: mradermacher tags: - not-for-all-audiences --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/crestf411/llama-3-daybreak-v0.3-70b-hf <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | 
[GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | | | [PART 1](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/llama-3-daybreak-v0.3-70b-hf-i1-GGUF/resolve/main/llama-3-daybreak-v0.3-70b-hf.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf
RichardErkhov
2024-05-27T19:33:30Z
413
0
null
[ "gguf", "region:us" ]
null
2024-05-27T17:31:57Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged - GGUF - Model creator: https://huggingface.co/dhmeltzer/ - Original model: https://huggingface.co/dhmeltzer/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q2_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q2_K.gguf) | Q2_K | 2.36GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ3_XS.gguf) | IQ3_XS | 2.6GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ3_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ3_S.gguf) | IQ3_S | 2.75GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K_S.gguf) | Q3_K_S | 2.75GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ3_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ3_M.gguf) | IQ3_M | 2.9GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K.gguf) | Q3_K | 3.07GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K_M.gguf) | Q3_K_M | 3.07GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q3_K_L.gguf) | Q3_K_L | 3.35GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ4_XS.gguf) | IQ4_XS | 3.4GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_0.gguf) | Q4_0 | 3.56GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.IQ4_NL.gguf) | IQ4_NL | 3.58GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_K_S.gguf) | Q4_K_S | 3.59GB | | 
[Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_K.gguf) | Q4_K | 3.8GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_K_M.gguf) | Q4_K_M | 3.8GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q4_1.gguf) | Q4_1 | 3.95GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_0.gguf) | Q5_0 | 4.33GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_K_S.gguf) | Q5_K_S | 4.33GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_K.gguf) | Q5_K | 4.45GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_K_M.gguf) | Q5_K_M | 4.45GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_1.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q5_1.gguf) | Q5_1 | 4.72GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q6_K.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q6_K.gguf) | Q6_K | 5.15GB | | [Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q8_0.gguf](https://huggingface.co/RichardErkhov/dhmeltzer_-_Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged-gguf/blob/main/Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged.Q8_0.gguf) | Q8_0 | 6.67GB | Original model description: # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_dhmeltzer__Llama-2-7b-hf-eli5-cleaned-1024_qlora_merged) | Metric | Value | |-----------------------|---------------------------| | Avg. | 44.13 | | ARC (25-shot) | 53.67 | | HellaSwag (10-shot) | 78.21 | | MMLU (5-shot) | 45.9 | | TruthfulQA (0-shot) | 46.13 | | Winogrande (5-shot) | 73.8 | | GSM8K (5-shot) | 4.7 | | DROP (3-shot) | 6.53 |
zhentaoyu/Llama-2-7b-chat-hf-Q4_0-GGUF
zhentaoyu
2024-06-12T05:28:55Z
413
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-2", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
text-generation
2024-06-12T05:28:38Z
--- language: - en license: llama2 tags: - facebook - meta - pytorch - llama - llama-2 - llama-cpp - gguf-my-repo base_model: meta-llama/Llama-2-7b-chat-hf extra_gated_heading: You need to share contact information with Meta to access this model extra_gated_prompt: "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means\ \ the terms and conditions for use, reproduction, distribution and modification\ \ of the Llama Materials set forth herein. \n\"Documentation\" means the specifications,\ \ manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\ \ \n\"Licensee\" or \"you\" means you, or your employer or any other person or\ \ entity (if you are entering into this Agreement on such person or entity's behalf),\ \ of the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf. \n\"Llama 2\"\ \ means the foundational large language models and software and algorithms, including\ \ machine-learning model code, trained model weights, inference-enabling code, training-enabling\ \ code, fine-tuning enabling code and other elements of the foregoing distributed\ \ by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.\n\"Llama\ \ Materials\" means, collectively, Meta's proprietary Llama 2 and documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\ \nBy clicking \"I Accept\" below or by using or distributing any portion or element\ \ of the Llama Materials, you agree to be bound by this Agreement.\n1. License Rights\ \ and Redistribution. \na. Grant of Rights. You are granted a non-exclusive, worldwide,\ \ non- transferable and royalty-free limited license under Meta's intellectual property\ \ or other rights owned by Meta embodied in the Llama Materials to use, reproduce,\ \ distribute, copy, create derivative works of, and make modifications to the Llama\ \ Materials. \nb. Redistribution and Use.\ni. If you distribute or make the Llama\ \ Materials, or any derivative works thereof, available to a third party, you shall\ \ provide a copy of this Agreement to such third party. \nii. If you receive Llama\ \ Materials, or any derivative works thereof, from a Licensee as part of an integrated\ \ end user product, then Section 2 of this Agreement will not apply to you. \n\ iii. You must retain in all copies of the Llama Materials that you distribute the\ \ following attribution notice within a \"Notice\" text file distributed as a part\ \ of such copies: \"Llama 2 is licensed under the LLAMA 2 Community License, Copyright\ \ (c) Meta Platforms, Inc. All Rights Reserved.\"\niv. Your use of the Llama Materials\ \ must comply with applicable laws and regulations (including trade compliance\ \ laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials\ \ (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated\ \ by reference into this Agreement.\nv. 
You will not use the Llama Materials or\ \ any output or results of the Llama Materials to improve any other large language\ \ model (excluding Llama 2 or derivative works thereof). \n\n2. Additional Commercial\ \ Terms. If, on the Llama 2 version release date, the monthly active users of the\ \ products or services made available by or for Licensee, or Licensee's affiliates,\ \ is greater than 700 million monthly active users in the preceding calendar month,\ \ you must request a license from Meta, which Meta may grant to you in its sole\ \ discretion, and you are not authorized to exercise any of the rights under this\ \ Agreement unless or until Meta otherwise expressly grants you such rights.\n\ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS\ \ AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \"AS IS\" BASIS, WITHOUT\ \ WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION,\ \ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A\ \ PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS\ \ OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED\ \ WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation\ \ of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY\ \ OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE,\ \ ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,\ \ CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS\ \ AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials.\nb. Subject to Meta's ownership of Llama Materials and derivatives\ \ made by or for Meta, with respect to any derivative works and modifications of\ \ the Llama Materials that are made by you, as between you and Meta, you are and\ \ will be the owner of such derivative works and modifications.\nc. If you institute\ \ litigation or other proceedings against Meta or any entity (including a cross-claim\ \ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs\ \ or results, or any portion of any of the foregoing, constitutes infringement\ \ of intellectual property or other rights owned or licensable by you, then any\ \ licenses granted to you under this Agreement shall terminate as of the date such\ \ litigation or claim is filed or instituted. You will indemnify and hold harmless\ \ Meta from and against any claim by any third party arising out of or related \ \ to your use or distribution of the Llama Materials.\n6. Term and Termination.\ \ The term of this Agreement will commence upon your acceptance of this Agreement\ \ or access to the Llama Materials and will continue in full force and effect until\ \ terminated in accordance with the terms and conditions herein. Meta may terminate\ \ this Agreement if you are in breach of any term or condition of this Agreement.\ \ Upon termination of this Agreement, you shall delete and cease use of the Llama\ \ Materials. 
Sections 3, 4 and 7 shall survive the termination of this Agreement.\ \ \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed\ \ under the laws of the State of California without regard to choice of law principles,\ \ and the UN Convention on Contracts for the International Sale of Goods does not\ \ apply to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement. \n### Llama 2 Acceptable Use Policy\n\ Meta is committed to promoting safe and fair use of its tools and features, including\ \ Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy\ \ (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\n\ #### Prohibited Uses\nWe want everyone to use Llama 2 safely and responsibly. You\ \ agree you will not use, or allow others to use, Llama 2 to:\n1. Violate the law\ \ or others’ rights, including to:\n 1. Engage in, promote, generate, contribute\ \ to, encourage, plan, incite, or further illegal or unlawful activity or content,\ \ such as: \n 1. Violence or terrorism \n 2. Exploitation or harm\ \ to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4.\ \ The illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6.\ \ Any other criminal activity\n 2. Engage in, promote, incite, or facilitate\ \ the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n \ \ 4. Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices \n 5. Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any\ \ action or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama 2 Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system \n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Llama 2 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. 
Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Llama 2 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Llama 2 or outputs are human-generated\n\ \ 6. Generating or facilitating false online engagement, including fake reviews\ \ and other means of fake online engagement \n 4. Fail to appropriately disclose\ \ to end users any known dangers of your AI system \nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means: \n * Reporting issues with\ \ the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\n\ \ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\ \ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\ \ \n * Reporting violations of the Acceptable Use Policy or unlicensed uses of\ \ Llama: [[email protected]](mailto:[email protected])" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). extra_gated_button_content: Submit pipeline_tag: text-generation --- # zhentaoyu/Llama-2-7b-chat-hf-Q4_0-GGUF This model was converted to GGUF format from [`meta-llama/Llama-2-7b-chat-hf`](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo zhentaoyu/Llama-2-7b-chat-hf-Q4_0-GGUF --hf-file llama-2-7b-chat-hf-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo zhentaoyu/Llama-2-7b-chat-hf-Q4_0-GGUF --hf-file llama-2-7b-chat-hf-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. 
``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo zhentaoyu/Llama-2-7b-chat-hf-Q4_0-GGUF --hf-file llama-2-7b-chat-hf-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo zhentaoyu/Llama-2-7b-chat-hf-Q4_0-GGUF --hf-file llama-2-7b-chat-hf-q4_0.gguf -c 2048 ```
Ali-C137/Q2AW1M-0010
Ali-C137
2024-06-21T20:41:08Z
413
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-21T13:59:48Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
qnixsynapse/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF
qnixsynapse
2024-06-22T13:16:05Z
413
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "region:us" ]
null
2024-06-22T13:15:42Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.3 license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # qnixsynapse/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo qnixsynapse/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo qnixsynapse/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo qnixsynapse/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo qnixsynapse/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -c 2048 ```
versae/Gromenauer-7B-Q4_K_M-GGUF
versae
2024-06-24T21:49:49Z
413
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "es", "dataset:fistro/gromenauer", "base_model:bertin-project/Gromenauer-7B", "license:apache-2.0", "region:us" ]
text-generation
2024-06-24T21:49:31Z
--- base_model: bertin-project/Gromenauer-7B datasets: - fistro/gromenauer language: - es license: apache-2.0 pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # versae/Gromenauer-7B-Q4_K_M-GGUF This model was converted to GGUF format from [`bertin-project/Gromenauer-7B`](https://huggingface.co/bertin-project/Gromenauer-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/bertin-project/Gromenauer-7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo versae/Gromenauer-7B-Q4_K_M-GGUF --hf-file gromenauer-7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo versae/Gromenauer-7B-Q4_K_M-GGUF --hf-file gromenauer-7b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo versae/Gromenauer-7B-Q4_K_M-GGUF --hf-file gromenauer-7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo versae/Gromenauer-7B-Q4_K_M-GGUF --hf-file gromenauer-7b-q4_k_m.gguf -c 2048 ```
jeiku/Aura_Qwen2_v3_7B-Q4_K_M-GGUF
jeiku
2024-06-28T05:13:19Z
413
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "llama-cpp", "gguf-my-repo", "base_model:jeiku/Aura_Qwen2_v3_7B", "endpoints_compatible", "region:us" ]
null
2024-06-28T05:12:59Z
--- base_model: jeiku/Aura_Qwen2_v3_7B library_name: transformers tags: - mergekit - merge - llama-cpp - gguf-my-repo --- # jeiku/Aura_Qwen2_v3_7B-Q4_K_M-GGUF This model was converted to GGUF format from [`jeiku/Aura_Qwen2_v3_7B`](https://huggingface.co/jeiku/Aura_Qwen2_v3_7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jeiku/Aura_Qwen2_v3_7B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo jeiku/Aura_Qwen2_v3_7B-Q4_K_M-GGUF --hf-file aura_qwen2_v3_7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo jeiku/Aura_Qwen2_v3_7B-Q4_K_M-GGUF --hf-file aura_qwen2_v3_7b-q4_k_m.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo jeiku/Aura_Qwen2_v3_7B-Q4_K_M-GGUF --hf-file aura_qwen2_v3_7b-q4_k_m.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo jeiku/Aura_Qwen2_v3_7B-Q4_K_M-GGUF --hf-file aura_qwen2_v3_7b-q4_k_m.gguf -c 2048 ```
Davlan/bert-base-multilingual-cased-finetuned-amharic
Davlan
2021-06-02T12:37:53Z
412
2
transformers
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
---
language: am
---

# bert-base-multilingual-cased-finetuned-amharic

## Model description

**bert-base-multilingual-cased-finetuned-amharic** is an **Amharic BERT** model obtained by replacing the mBERT vocabulary with an Amharic vocabulary (because the language was not supported) and fine-tuning the **bert-base-multilingual-cased** model on Amharic texts. It provides **better performance** than multilingual BERT on Amharic named entity recognition datasets.

Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an Amharic corpus using an Amharic vocabulary.

## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for masked token prediction.

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-amharic')
>>> unmasker("የአሜሪካ የአፍሪካ ቀንድ ልዩ መልዕክተኛ ጄፈሪ ፌልትማን በአራት አገራት የሚያደጉትን [MASK] መጀመራቸውን የአሜሪካ የውጪ ጉዳይ ሚንስቴር አስታወቀ።")
```

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.

## Training data

This model was fine-tuned on [Amharic CC-100](http://data.statmt.org/cc-100/).

## Training procedure

This model was trained on a single NVIDIA V100 GPU.

## Eval results on Test set (F-score, average over 5 runs)

Dataset| mBERT F1 | am_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 0.0 | 60.89

### BibTeX entry and citation info

By David Adelani
kohbanye/pixel-art-style
kohbanye
2023-01-27T11:30:59Z
412
54
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "stable-diffusion-diffusers", "en", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-12-18T07:27:52Z
--- language: - en thumbnail: "https://huggingface.co/kohbanye/pixel-art-style/resolve/main/sample.png" tags: - stable-diffusion - text-to-image - stable-diffusion-diffusers --- # Pixel Art Style This is a fine-tuned model of Stable Diffusion. <br> Add token **pixelartstyle** to your prompt. ![pixelart](./sample.png) _an astronaut riding a horse, pixelartstyle_
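The repo tags indicate a standard `StableDiffusionPipeline` checkpoint, so a minimal diffusers sketch looks like the following; the prompt reuses the card's own example, and the step/guidance values are just reasonable defaults rather than recommendations from the author.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned checkpoint; fp16 keeps VRAM usage modest.
pipe = StableDiffusionPipeline.from_pretrained(
    "kohbanye/pixel-art-style",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The trigger token "pixelartstyle" must appear in the prompt.
prompt = "an astronaut riding a horse, pixelartstyle"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("astronaut_pixelart.png")
```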
BridgeTower/bridgetower-large-itm-mlm-gaudi
BridgeTower
2023-02-15T22:55:57Z
412
1
transformers
[ "transformers", "pytorch", "bridgetower", "gaudi", "en", "dataset:conceptual_captions", "dataset:conceptual_12m", "dataset:sbu_captions", "dataset:visual_genome", "dataset:mscoco_captions", "arxiv:2206.08657", "arxiv:1504.00325", "license:mit", "endpoints_compatible", "region:us" ]
null
2023-01-26T19:48:46Z
--- language: en tags: - bridgetower - gaudi license: mit datasets: - conceptual_captions - conceptual_12m - sbu_captions - visual_genome - mscoco_captions --- # BridgeTower large-itm-mlm-gaudi model The BridgeTower model was proposed in "BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning" by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The model was pretrained on English-language data using masked language modeling (MLM) and image-text matching (ITM) objectives. It was introduced in [this paper](https://arxiv.org/pdf/2206.08657.pdf) and first released in [this repository](https://github.com/microsoft/BridgeTower). BridgeTower was accepted to [AAAI'23](https://aaai.org/Conferences/AAAI-23/). ## Model description The abstract from the paper is the following: 
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BridgeTower, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the cross-modal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BridgeTower achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BridgeTower achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BridgeTower achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. 
## Intended uses & limitations ### How to use Here is how to use this model to perform image and text matching: ```python from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval import requests from PIL import Image url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") # forward pass scores = dict() for text in texts: # prepare inputs encoding = processor(image, text, return_tensors="pt") outputs = model(**encoding) scores[text] = outputs.logits[0,1].item() ``` Here is how to use this model to perform masked language modeling: ```python from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000360943.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") text = "a <mask> looking out of the window" processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-gaudi") # prepare inputs encoding = processor(image, text, return_tensors="pt") # forward pass outputs = model(**encoding) results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) print(results) #.a cat looking out of the window. ``` ## Training data The BridgeTower model was pretrained on four public image-caption datasets: - [Conceptual Captions (CC3M)](https://ai.google.com/research/ConceptualCaptions/) - [Conceptual 12M (CC12M)](https://github.com/google-research-datasets/conceptual-12m) - [SBU Captions](https://www.cs.rice.edu/~vo9/sbucaptions/) - [MSCOCO Captions](https://arxiv.org/pdf/1504.00325.pdf) - [Visual Genome](https://visualgenome.org/) The total number of unique images in the combined data is around 14M. ## Training procedure ### Pretraining The model was pre-trained for 10 epochs on an Intel AI supercomputing cluster using 512 Gaudis and 128 Xeons with a batch size of 2048. The optimizer used was AdamW with a learning rate of 1e-7. No data augmentation was used except for center-crop. The image resolution in pre-training is set to 294 x 294. ## Evaluation results Please refer to [Table 5](https://arxiv.org/pdf/2206.08657.pdf) for BridgeTower's performance on Image Retrieval and other downstream tasks. ### BibTeX entry and citation info ```bibtex @article{xu2022bridge, title={BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning}, author={Xu, Xiao and Wu, Chenfei and Rosenman, Shachar and Lal, Vasudev and Che, Wanxiang and Duan, Nan}, journal={arXiv preprint arXiv:2206.08657}, year={2022} } ```
erfanzar/PGT-1B-2EP
erfanzar
2023-05-15T07:24:05Z
412
2
transformers
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "en", "fr", "fa", "nl", "dataset:OpenAssistant/oasst1", "dataset:erfanzar/CC-H2OAI-OASST-1-TRAIN", "dataset:erfanzar/CC-OASST-1-TRAIN", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-04-25T21:09:24Z
--- license: apache-2.0 datasets: - OpenAssistant/oasst1 - erfanzar/CC-H2OAI-OASST-1-TRAIN - erfanzar/CC-OASST-1-TRAIN language: - en - fr - fa - nl metrics: - bertscore pipeline_tag: text-generation --- # OpenSourceTransformers-OST Project [OST-OpenSourceTransformers Github](https://github.com/erfanzar/OST-OpenSourceTransformers) ## Hello community This model is only 1B parameters, but you could call it close to SOTA for its size; it can also run on 4 GB of GPU RAM and handles dialogue well. ### Train Parameters - learning rate : 2e-4 - lr scheduler : cosine - device : T4 GPU * 4 - batch size : AutoFind - train time : 12 H - max sequence length : 1024 - epochs : 2 ## Usage Code ```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from IPython.display import clear_output
import textwrap

tokenizer = AutoTokenizer.from_pretrained("erfanzar/PGT-1B-2EP")
model = AutoModelForCausalLM.from_pretrained("erfanzar/PGT-1B-2EP", device_map='auto', load_in_8bit=True)

# Re-wrap long output lines to a readable width.
verify_text = lambda txt: '\n'.join([textwrap.fill(t, width=110) for t in txt.split('\n')])

def ppp(text: str):
    """Pre-process the prompt into the chat format used during training."""
    return f"<|prompter|> {text} <|endoftext|><|assistant|>"

def generate(text, max_new_tokens: int = 1024, use_ppp: bool = False, b_pair=False):
    text = ppp(text) if use_ppp else text
    for i in range(max_new_tokens):
        enc = tokenizer(text, return_tensors='pt', add_special_tokens=False)
        text_r = text
        # Generate one token at a time so the output can be streamed.
        enc = model.generate(enc.input_ids, max_new_tokens=1, pad_token_id=0)
        text = tokenizer.decode(enc[0], skip_special_tokens=False)
        text = text[:-4] + tokenizer.eos_token if text[-4:] == '\n\n\n\n' else text
        if text.endswith(tokenizer.eos_token) or text.endswith('\n\n\n\n\n'):
            yield text[len(text_r):] if b_pair else text
            break
        else:
            yield text[len(text_r):] if b_pair else text

for v in generate('what is a gpu', 512, True):
    clear_output(wait=True)
    print(verify_text(v), end='')
``` # Pythia-1B ## Model Details ### Pretrained Model - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Fine-tuned languages: English, Persian, French, and Dutch - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedures, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) ## NOTE The Pythia Suite is **NOT** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. For example, the model may generate harmful or offensive text. Also keep in mind that this model is still not strong for Persian, French, and Dutch, at least in this version.
prognosis/cardio-pdf-text-chunks-v2
prognosis
2023-06-04T19:24:17Z
412
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-06-04T19:21:14Z
Entry not found
sazyou-roukaku/LittleStepMix
sazyou-roukaku
2023-07-04T10:47:46Z
412
32
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "ja", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-06-25T06:57:42Z
--- license: creativeml-openrail-m language: - ja library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - text-to-image --- License:[CreativeML Open RAIL-M](https://huggingface.co/sazyou-roukaku/LittleStepMix/blob/main/license_v1.txt)<br> Additional Copyright: sazyou_roukaku (TwitterID [@sazyou_roukaku](https://twitter.com/sazyou_roukaku)) as of June 25, 2023<br> このモデルは『CreativeML Open RAIL-M』でLicenseそのものに変更はありません。<br> しかし追加著作者として佐城郎画の名前が追加されています。<br> なお『CreativeML Open RAIL-M』に記載されている通り、<br> 本モデルを使用しての生成物に関してはLicenseの使用制限Aの事例を除き、当方は一切関与致しません。<br> 犯罪目的利用や医療用画像など特定専門的な用途での利用は使用制限Aで禁止されています。<br> 必ず確認しご利用ください。<br> また当方は一切責任を持ちません。免責されていることをご了承の上、ご使用ください。<br> <br> このCheckPointのダウンロード・使用は上記CreativeML Open RAIL-M Licenseを確認の上、<br> 同意したという前提及び契約に基づくものと判断されます。<br> <h4>更新履歴</h4> <ul> <li>6/25 LittleStepMix_v1公開</li> <li>7/1 LittleStepMix_A・B・C公開</li> <li>7/3 LittleStepMix_A、Textencoder変更前をアップしていた為、削除し変更済版を再公開</li> </ul> <h4>制限</h4> <div class="px-2"> <table class="table-fixed border mt-0 text-xs"> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> 著作者表記を入れずにモデルを使用する<br> Use the model without crediting the creator </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルで生成した画像を商用利用する<br> Sell images they generate </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> 商用画像生成サービスに、このモデルを使用する<br> Run on services that generate images for money </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルを使用したマージモデルを共有・配布する<br> Share merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデル、または派生モデルを販売する<br> Sell this model or merges using this model </td> </tr> <tr> <td class="align-middle px-4 w-8"> <span class="text-green-500"> <h5>OK</h5> </span> </td> <td> このモデルをマージしたモデルに異なる権限を設定する<br> Have different permissions when sharing merges </td> </tr> </table> </div> なお、上記のモデルそのものの販売や商用画像生成サービスへの利用は、<br> 『CreativeML Open RAIL-M』のLicense上、使用制限Aに追記記載しない限り、<br> 制限することが本来できない為、マージ者への負担も考慮し、civitai制限表記上OKとしているだけであり、<br> 積極的な推奨は行っておらず、またそれにより何らかの問題が生じても当方は一切責任を持ちません。<br> その点、ご留意いただくようお願いいたします。<br> <br> <h2>LittleStepMix_v1 マージ利用モデル一覧</h2> <ul> <li><a href="https://civitai.com/models/4384">dreamshaper_6BakedVae</a> ©Lykon</li> <li><a href="https://civitai.com/models/25694">epicrealism_newAge</a> ©epinikion</li> <li><a href="https://civitai.com/models/1169">sxd_v10</a> ©izuek</li> <li><a href="https://huggingface.co/haor/Evt_V4-preview">Evt_V4_e04_ema</a> ©haor</li> <li><a href="https://huggingface.co/Crosstyan/BPModel">bp_mk5</a> ©Crosstyan</li> <li><a href="https://huggingface.co/naclbit/trinart_characters_19.2m_stable_diffusion_v1">trinart_characters_it4_v1</a> ©Sta, AI Novelist Dev <a href="https://ai-novel.com/">(https://ai-novel.com/)</a> @ Bit192, Inc.</li> </ul> <h2>LLittleStepMix_A・B・C追加マージ利用モデル</h2> <ul> <li><a href="https://huggingface.co/Ai-tensa/FlexWaifu">FlexWaifuRainbow</a> <a href="https://twitter.com/Ai_tensa">©Ai-tensa</a></li> <li><a href="https://huggingface.co/hakurei/waifu-diffusion-v1-3">wd-v1-3-float16</a> developed by Anthony Mercurio, Salt, and Cafe</a></li> </ul> <p></p> -------------------------------------------------------------------------- <h4>サンプル</h4> <img src="https://huggingface.co/sazyou-roukaku/LittleStepMix/resolve/main/sample/002.jpg" 
width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> (gyaru:1.3),high resolution,ultra-detail,solo,short shirt and short shorts,locker room, (cowboy shot:1.2),sexy smile,blonde long hair, Negative prompt: (worst quality:2),(low quality:1.4),(manicure:1.5),(long neck:2),lip Steps: 30 Sampler: DPM++ 2M Karras CFG scale: 7 Seed: 3358380436 </pre> <img src="https://huggingface.co/sazyou-roukaku/LittleStepMix/resolve/main/sample/001.jpg" width="100%" height="100%"> <pre style="white-space: pre-line;" class="w-full"> 1girl,handsome face,cool beauty,high resolution,ultra-detail,solo,punk tee and cargo pants, london street, (cowboy shot:1.2),happy smile,black short hair, Negative prompt: (worst quality:2),(low quality:1.4),(manicure:1.5),(long neck:2),lip Steps: 30 Sampler: DPM++ 2M Karras CFG scale: 7 Seed: 269540596 </pre> -------------------------------------------------------------------------- <div> <h3>詳細</h3> <p> <div class="px-2"> <div class="border p-2"> <details> <summary><h4>LittleStepMix_A・B・C</h4></summary> CLIP設定/clip skip:2<br> 推奨VAE/mse840000_klf8anime_klf8anime2.vae<br> もしくはフォルダ内にあるsr_SDv2vae_kl-f8anime2.safetensors<br> sr_SDv2vae_kl-f8anime2.safetensorsはSD2VAEとkl-f8anime2を私がマージしたVAEです。<br> LittleStepMix_A、LittleStepMix_B、LittleStepMix_Cは焼き込みなしのNoVAEです。<br> ClearVAEは1.0がNAIVAEの影響があると記載があり、それ以降のVersionも出所不明の為、コンセプト的に推奨していません。<br> <br> 1人の時はsoloをプロンプトで入れないと、変なコマ割り画像のような表示になりやすい傾向があります。 SD1.4から確認されているのですが、ACertainty系は特にこの傾向が強いので、1人の場合はsoloと指定推奨。<br> NFSWはそこそこまでは普通に出せます。 </details> </div> </div> <div class="px-2"> <div class="border p-2"> <details> <summary><h4>LittleStepMix_v1</h4></summary> CLIP設定/clip skip:2<br> 推奨VAE/mse840000_klf8anime_klf8anime2.vae<br> もしくはフォルダ内にあるsr_SDv2vae_kl-f8anime2.safetensorsが好みです。<br> sr_SDv2vae_kl-f8anime2.safetensorsはSD2VAEとkl-f8anime2を私がマージしたVAEです。<br> <br> なおLittleStepMix_v1はSD1.xのデフォルトVAEが標準焼き込み済みです。<br> <br> 自然言語(文章)プロンプトだと、より顔のリアル化が強くなる傾向がみられます。<br> 単文プロンプトでの利用を推奨します。<br> なおイラストモデル・フォトリアルモデルも含めた今後の自己ベース素材モデルとしての公開の側面も強く、現状詳しい能力は検証中です。ご了承ください。<br> なおベースがフォトリアルモデルやセミリアルモデルの為、エフェクト系はかなり弱い印象です。<br> </details> </div> </div> <h3>FAQ</h3> <h4>Q1:LittleStepMixとは何か</h4> A1:<br> 現在イラストマージモデルはleakモデルの混入の問題が懸念され、次々に公開停止が相次ぐなど萎縮ムードに入っています。<br> 当モデルは比較的安牌と思われる訓練モデルを主軸とし、今後調整予定のモデルの基礎として作っています。<br> 完全に混入がないとは断言できないものの、マージ素材は表記のもの以外一切使用していない点。(add用のSD1.4、SD1.5は除く)<br> 訓練モデルのみでのマージである点から、比較的低リスクのラインを目指しています。<br> 基本的には企業モデル等が投入されるなどの時代までの繋ぎとしての役割です。<br> 恐れ入りますが、全てのマージ素材を確認の上、ご利用は自己でご判断ください。<br> <br> *7/1追記* LittleStepMix_A・B・CはLittleStepMixを土台としてイラストモデル化しました。<br> マージ素材として自由にご利用いただいて問題ありません。<br> <h4>Q2:各学習モデル選定基準について</h4> A2:<br> *7/1追記* sampleフォルダ内に、Anything-V3.0を基準として、<br> Baka-DiffusionV1(fp16)、sd-v1-4、LittleStepMixシリーズ4種及び主軸モデルであるdreamshaperで現状最古の公開モデル<br> dreamshaper_252をランダムSeedで10回、<br> IN01-02,04-05,07-08/OUT03-11のcosine一致率を出力したファイルを公開いたします。<br> Anything-V3.0に対し、SD1.4は概ね84%ほど一致。<br> dreamshaper_252.safetensorsで88%。LittleStepMixシリーズは概ね89%程度の一致率です。<br> Baka-DiffusionV1を採用しなかった理由もこの数値にあります。<br> 下記のASimilarityCalculatiorをベースに、ランダムSeedで、合計ではなく各数値を出せるよう改良したものを用いています。<br> ご参考までに。 <br> <br> <br> ①dreamshaper_6BakedVae<br> 本モデルの<strong>主軸</strong>となっている訓練モデルです。<br> 訓練モデルの表記があり、複数の商用画像生成サービスでも利用されている為、一定の信頼性が担保されていると判断しています。<br> <strong>アスカテストなどでの類似性は基本的にdreamshaper_6BakedVae由来</strong>です。<br> <br> ②sxd_v10<br> v0.8と違い、v1.0公開日はリーク後ながらSD1.5ベースの訓練モデルで、リアリティ重視の為、総合的に判断。<br> 人体構造強化と将来NFSWモデル化を行う際の補強として採用。<br> <br> ③epicrealism_newAge<br> 現行の訓練モデルで最強のスペックを誇ると思われるモデル。<br> 背景補強と能力の高さから採用。<br> 
最新ではないのは、他の私のフォトリアルモデルとの兼ね合いと、コントラストなどの兼ね合いからepicrealism_newAgeを選択。<br> <br> ④Evt_V4_e04_ema<br> ACertaintyというleakデータを含まないと公言しているイラスト学習モデルでトレーニングを行い生み出されたモデル。<br> 絵柄の主軸モデル。e04採用はあまり根拠がない。絵柄的に一番好みなので採用し、フォトリアルモデルでこちらを利用しているので変更すると管理が面倒になるという理由だけです。<br> ※ACertaintyはNOVEL AIのデータを蒸留している可能性はありますが、こちらは特許法に抵触しない為、問題ないと考えています。<br> ACertainty<br> <a href="https://huggingface.co/JosephusCheung/ACertainty">https://huggingface.co/JosephusCheung/ACertainty</a><br> <a href="https://huggingface.co/JosephusCheung/ASimilarityCalculatior">https://huggingface.co/JosephusCheung/ASimilarityCalculatior</a><br> <br> ⑤bp_mk5<br> ACertaintyベースの訓練モデル。上記同様。<br> <br> ⑥trinart_characters_it4_v1<br> AIのべりすとで有名な会社が公開してくださったモデルなので一番信頼性があります。<br> イラスト要素補強として使用しています。<br> <br> ⑦FlexWaifuRainbow<br> モデルの逆マージ解析コードを公開するなども行っている天紗愛氏がWD1.3に追加学習を施したモデル。<br> ACertaintyベースのモデル採用に当たり、ACertaintyの解析結果なども参考にさせていただいています。<br> 連続出力時の絵柄の安定性と信頼性から選ばせていただきました。 <br> <h3>Q3:今回の制限に問題や矛盾はないのか</h3> <h4>A3:</h4> <strong>dreamshaper_6BakedVae</strong> はcivitaiのパーミションが、 <strong>OK:Have different permissions when sharing merges</strong>となっており解除可能。<br> 他は制限なしの為、今回全て制限なしとし公開しております。<br> <br> なおマージ利用モデル側にLicense変更・制限変更等が生じた際も<br> ダウンロード時点のLicenseや制限を前提として公開している為、creativeml-openrail-mに準じます。<br> こちらはLittleStepMIXMerge_LicenseSS_v1に該当モデルのSSを保管しております。<br> ただしhuggingface公開のモデルはSSよりリポジトリのほうが信頼性が高いので、保管しておりません。<br> <br> なおマージ利用モデル側に重大な問題が発生した場合は、モデルの公開停止を行い、<br> 利用停止を呼びかける可能性はありますが、<strong>当方側を理由とした追加制限を設けることは致しません。</strong> </div>
zlsl/l_soft_erotic_tm
zlsl
2024-02-24T08:55:18Z
412
2
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "chat", "porn", "sex", "erotic", "roleplay", "ru", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-07-31T11:57:26Z
--- license: cc-by-nc-sa-4.0 language: - ru library_name: transformers tags: - chat - porn - sex - erotic - roleplay widget: - text: "Я: (улыбаюсь)" - text: "Аня: Ох!" pipeline_tag: text-generation --- A model for erotic roleplay in Russian. It keeps track of context reasonably well; suggested starting parameters are listed below.

A further version is also available - [https://huggingface.co/zlsl/l_erotic_kink_chat](https://huggingface.co/zlsl/l_erotic_kink_chat)

do_sample: true<br> top_p: 0.96 # above 1.2 the output drifts far away from the context<br> top_k: 14<br> temperature: 0.88<br> repetition_penalty: 1.16<br> encoder_repetition_penalty: 1.09-1.15<br> typical_p: 1.0<br>

For chat use, it is best to stop generation after '\n'; also allow more than 5 generation attempts and an expected number of new tokens > 150 - that makes the dialogues more interesting.

It is strongly recommended to indicate actions and thoughts in parentheses, both in the context and during the dialogue. For example: Привет (вхожу в комнату, закрываю дверь)

The dataset was prepared with the TiMe technique.

An optimized Android front-end for these models in chat mode - https://github.com/zlsl/pocketai

## For text-generation-webui users The tool's handling of GPT-2, GPTJ, GPT-NEO and similar models is broken: the tokenizer is loaded incorrectly. The error looks like this:<br> >eos_token_id = eos_token_id[0] >IndexError: list index out of range

It is easy to fix: in the file modules/models.py, in the load_tokenizer() function, add the line<br> <code>tokenizer.eos_token_id = 2</code><br> before<br> <code>return tokenizer</code>
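For reference, the recommended sampling settings map directly onto `transformers` generation arguments. A minimal sketch, assuming the checkpoint loads as a standard GPT-2-style causal LM; the `encoder_repetition_penalty` and `typical_p` knobs are left at their defaults here:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zlsl/l_soft_erotic_tm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Actions/thoughts go in parentheses, as the card recommends.
prompt = "Привет (вхожу в комнату, закрываю дверь)\n"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,
    top_p=0.96,
    top_k=14,
    temperature=0.88,
    repetition_penalty=1.16,
    max_new_tokens=150,
    pad_token_id=tokenizer.eos_token_id,
)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)

# For chat use, keep only the first generated line, as suggested above.
print(text[len(prompt):].split("\n")[0])
```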
artificialguybr/LineAniRedmond-LinearMangaSDXL
artificialguybr
2023-10-07T04:13:08Z
412
9
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-12T00:21:43Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: LineAniAF widget: - text: LineAniAF --- # LineAni.Redmond ![row01](00001-940815997.png) Download V2 HERE: https://huggingface.co/artificialguybr/LineAniRedmond-LinearMangaSDXL-V2/new/main/?filename=README.md Test all my loras here: https://huggingface.co/spaces/artificialguybr/artificialguybr-demo-lora LineAni.Redmond is here! I'm grateful for the GPU time from Redmond.AI that allowed me to make this LORA! This is a Manga Lineart LORA fine-tuned on SD XL 1.0. The LORA has a high capacity to generate Manga Lineart styles. It's a versatile LORA. You can use detailed, minimalist, colorful, or black and white as tags to control the results. The tag for the model: LineAniAF The LORA is not perfect and sometimes needs more than one gen to create good images. This is my first Anime LORA. Please be patient <3 It is inspired by a good LORA for SD 1.5! I really hope you like the LORA and use it. If you like the model and think it's worth it, you can make a donation to my Patreon or Ko-fi. Follow me on twitter to be the first to know about new models: https://twitter.com/artificialguybr/
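As a rough sketch of how the trigger tag is used with diffusers (not an official snippet from the author): the LoRA weight filename inside the repo is not stated in the card, so `load_lora_weights` is pointed at the repo id and `weight_name` may need to be set explicitly.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model the LoRA was trained against (see base_model in the metadata).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Load the LineAni LoRA directly from the Hub repo.
# If the file has a non-default name, also pass weight_name="<file>.safetensors".
pipe.load_lora_weights("artificialguybr/LineAniRedmond-LinearMangaSDXL")

# Trigger tag plus one of the style-control tags mentioned in the card.
prompt = "LineAniAF, detailed, a samurai standing in the rain, manga lineart"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("lineani_sample.png")
```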
TheLastBen/Pikachu_SDXL
TheLastBen
2023-08-29T10:35:30Z
412
4
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2023-08-12T15:22:27Z
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion - lora - diffusers base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: pikachu widget: - text: pikachu --- ### Pikachu #### SDXL LoRA by TheLastBen #### Prompts to start with : closeup on pikachu on a pirate ship, cinematic, screencap, high quality, light rays, sunrays, pov, ships, 1800s closeup on fluffy pikachu wearing a hoodie in a street in london, cinematic, screencap, high quality --- Trained using https://github.com/TheLastBen/fast-stable-diffusion SDXL trainer. ComfyUI seems to give better results than A1111, but that's just me. #### Sample pictures: !["" 0](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(44).webp) !["" 1](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(60).webp) !["" 2](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(41).webp) !["" 3](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(42).webp) !["" 4](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(43).webp) !["" 5](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(1).webp) !["" 6](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(46).webp) !["" 7](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(47).webp) !["" 8](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(48).webp) !["" 9](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(49).webp) !["" 10](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(50).webp) !["" 11](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(51).webp) !["" 12](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(53).webp) !["" 13](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(54).webp) !["" 14](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(55).webp) !["" 15](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(57).webp) !["" 16](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(59).webp) !["" 17](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(40).webp) !["" 18](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(61).webp) !["" 19](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(69).webp) !["" 20](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(70).webp) !["" 21](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(71).webp) !["" 22](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(72).webp) !["" 23](https://huggingface.co/TheLastBen/Pikachu_SDXL/resolve/main/images/res%20(73).webp)
Yntec/HassanBlend12
Yntec
2023-09-14T19:16:41Z
412
2
diffusers
[ "diffusers", "safetensors", "General", "Photorealistic", "Hassan", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "en", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-09-01T04:45:37Z
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image language: - en tags: - General - Photorealistic - Hassan - stable-diffusion - stable-diffusion-diffusers - text-to-image inference: true --- # Hassan Blend 1.2 A model that does better than Hassan Blend 1.5.1.2 at everything except photorealism. It has the MoistMixV2 VAE baked in. Sample and prompt: ![Sample](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/DfLHse6EgJk0VpzTUkzj0.png) concept art of CUTE girl in a pixel, chibi character, DETAILED EYES, key visual, summer day, magazine ad, 1940, iconic, highly detailed, digital painting, artstation, concept art, sharp focus, in harmony with nature, streamlined, hyperrealism by makoto shinkai and akihiko yoshida and wlop Original page: https://civitai.com/models/1173?modelVersionId=4635
TheBloke/guanaco-7B-GGUF
TheBloke
2023-09-27T12:52:50Z
412
0
transformers
[ "transformers", "gguf", "llama", "base_model:timdettmers/guanaco-7b", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-19T23:46:35Z
--- license: other model_name: Guanaco 7B base_model: timdettmers/guanaco-7b inference: false model_creator: Tim Dettmers model_type: llama prompt_template: '### Human: {prompt} ### Assistant: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Guanaco 7B - GGUF - Model creator: [Tim Dettmers](https://huggingface.co/timdettmers) - Original model: [Guanaco 7B](https://huggingface.co/timdettmers/guanaco-7b) <!-- description start --> ## Description This repo contains GGUF format model files for [Tim Dettmers' Guanaco 7B](https://huggingface.co/timdettmers/guanaco-7b). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/guanaco-7B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/guanaco-7B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/guanaco-7B-GGUF) * [Tim Dettmers's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/guanaco-7B-HF) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Guanaco ``` 
### Human: {prompt} 
### Assistant: 
``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
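The bpw figures for the Q3_K-Q6_K types can be cross-checked from the size of one quantised super-block. Here is a small Python sketch; the per-super-block byte counts are an assumption taken from ggml's `block_q3_K`-`block_q6_K` struct layouts rather than from this card:

```python
# Rough sanity check of the bpw figures quoted above, using the k-quant
# super-block layouts (QK_K = 256 weights per super-block). The byte counts
# below are an assumption based on ggml's block_q3_K..block_q6_K structs.
QK_K = 256

super_block_bytes = {
    "Q3_K": 32 + 64 + 12 + 2,       # high-bit mask + 2-bit quants + 6-bit scales + fp16 d
    "Q4_K": 128 + 12 + 2 + 2,       # 4-bit quants + 6-bit scales/mins + fp16 d and dmin
    "Q5_K": 128 + 32 + 12 + 2 + 2,  # as Q4_K plus one extra bit plane per weight
    "Q6_K": 128 + 64 + 16 + 2,      # 4-bit + 2-bit planes + 8-bit scales + fp16 d
}

for name, nbytes in super_block_bytes.items():
    print(f"{name}: {nbytes} bytes per {QK_K} weights -> {nbytes * 8 / QK_K} bpw")

# Prints 3.4375, 4.5, 5.5 and 6.5625 bpw, matching the figures in the list above.
```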
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [Guanaco-7B.Q2_K.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [Guanaco-7B.Q3_K_S.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [Guanaco-7B.Q3_K_M.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [Guanaco-7B.Q3_K_L.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [Guanaco-7B.Q4_0.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [Guanaco-7B.Q4_K_S.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [Guanaco-7B.Q4_K_M.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [Guanaco-7B.Q5_0.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [Guanaco-7B.Q5_K_S.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [Guanaco-7B.Q5_K_M.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [Guanaco-7B.Q6_K.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [Guanaco-7B.Q8_0.gguf](https://huggingface.co/TheBloke/guanaco-7B-GGUF/blob/main/Guanaco-7B.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/guanaco-7B-GGUF and below it, a specific filename to download, such as: Guanaco-7B.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/guanaco-7B-GGUF Guanaco-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/guanaco-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/guanaco-7B-GGUF Guanaco-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m Guanaco-7B.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Human: {prompt}\n### Assistant:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. ### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. 
llm = AutoModelForCausalLM.from_pretrained("TheBloke/guanaco-7B-GGUF", model_file="Guanaco-7B.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. 
<!-- footer end --> <!-- original-model-card start --> # Original model card: Tim Dettmers' Guanaco 7B <!-- header start --> <div style="width: 100%;"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <!-- header end --> # Tim Dettmers' Guanaco 7B fp16 HF These files are fp16 HF model files for [Tim Dettmers' Guanaco 7B](https://huggingface.co/timdettmers/guanaco-7b). It is the result of merging the LoRA then saving in HF fp16 format. ## Other repositories available * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-7B-GPTQ) * [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-7B-GGML) * [Merged, unquantised fp16 model in HF format](https://huggingface.co/TheBloke/guanaco-7B-HF) <!-- footer start --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD) ## Thanks, and how to contribute. Thanks to the [chirper.ai](https://chirper.ai) team! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman. Thank you to all my generous patrons and donaters! <!-- footer end --> # Original model card Not provided by original model creator. <!-- original-model-card end -->
TheBloke/MetaMath-7B-V1.0-GGUF
TheBloke
2023-09-27T12:54:33Z
412
4
transformers
[ "transformers", "gguf", "llama", "base_model:meta-math/MetaMath-7B-V1.0", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-09-24T03:03:17Z
--- license: llama2 model_name: MetaMath 7B V1.0 base_model: meta-math/MetaMath-7B-V1.0 inference: false model_creator: MetaMath model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: Let''s think step by step. ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # MetaMath 7B V1.0 - GGUF - Model creator: [MetaMath](https://huggingface.co/meta-math) - Original model: [MetaMath 7B V1.0](https://huggingface.co/meta-math/MetaMath-7B-V1.0) <!-- description start --> ## Description This repo contains GGUF format model files for [MetaMath's MetaMath 7B V1.0](https://huggingface.co/meta-math/MetaMath-7B-V1.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF) * [MetaMath's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-math/MetaMath-7B-V1.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca-CoT ``` 
Below is an instruction that describes a task. Write a response that appropriately completes the request. 
### Instruction: 
{prompt} 
### Response: Let's think step by step. 
``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [metamath-7b-v1.0.Q2_K.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes | | [metamath-7b-v1.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss | | [metamath-7b-v1.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss | | [metamath-7b-v1.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss | | [metamath-7b-v1.0.Q4_0.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [metamath-7b-v1.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss | | [metamath-7b-v1.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended | | [metamath-7b-v1.0.Q5_0.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [metamath-7b-v1.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended | | [metamath-7b-v1.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended | | [metamath-7b-v1.0.Q6_K.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss | | [metamath-7b-v1.0.Q8_0.gguf](https://huggingface.co/TheBloke/MetaMath-7B-V1.0-GGUF/blob/main/metamath-7b-v1.0.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/MetaMath-7B-V1.0-GGUF and below it, a specific filename to download, such as: metamath-7b-v1.0.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/MetaMath-7B-V1.0-GGUF metamath-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/MetaMath-7B-V1.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/MetaMath-7B-V1.0-GGUF metamath-7b-v1.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m metamath-7b-v1.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n\n### Instruction:\n{prompt}\n\n\n### Response: Let's think step by step." ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/MetaMath-7B-V1.0-GGUF", model_file="metamath-7b-v1.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: MetaMath's MetaMath 7B V1.0 No original model card was available. <!-- original-model-card end -->
TheBloke/airoboros-l2-13B-3.0-GGUF
TheBloke
2023-10-05T22:47:05Z
412
7
transformers
[ "transformers", "gguf", "llama", "dataset:jondurbin/airoboros-3.0", "base_model:jondurbin/airoboros-l2-13b-3.0", "license:llama2", "text-generation-inference", "region:us" ]
null
2023-10-05T22:38:05Z
--- base_model: jondurbin/airoboros-l2-13b-3.0 datasets: - jondurbin/airoboros-3.0 inference: false license: llama2 model_creator: Jon Durbin model_name: Airoboros L2 13B 3.0 model_type: llama prompt_template: '[INST] <<SYS>> You are a help, unbiased, uncensored assistant. <</SYS> {prompt} [/INST] ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Airoboros L2 13B 3.0 - GGUF - Model creator: [Jon Durbin](https://huggingface.co/jondurbin) - Original model: [Airoboros L2 13B 3.0](https://huggingface.co/jondurbin/airoboros-l2-13b-3.0) <!-- description start --> ## Description This repo contains GGUF format model files for [Jon Durbin's Airoboros L2 13B 3.0](https://huggingface.co/jondurbin/airoboros-l2-13b-3.0). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF) * [Jon Durbin's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-l2-13b-3.0) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Llama-2-Uncensored ``` [INST] <<SYS>> You are a help, unbiased, uncensored assistant. <</SYS> {prompt} [/INST] ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [airoboros-l2-13b-3.0.Q2_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [airoboros-l2-13b-3.0.Q3_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [airoboros-l2-13b-3.0.Q3_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [airoboros-l2-13b-3.0.Q3_K_L.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [airoboros-l2-13b-3.0.Q4_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [airoboros-l2-13b-3.0.Q4_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [airoboros-l2-13b-3.0.Q4_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [airoboros-l2-13b-3.0.Q5_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [airoboros-l2-13b-3.0.Q5_K_S.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [airoboros-l2-13b-3.0.Q5_K_M.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [airoboros-l2-13b-3.0.Q6_K.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [airoboros-l2-13b-3.0.Q8_0.gguf](https://huggingface.co/TheBloke/airoboros-l2-13B-3.0-GGUF/blob/main/airoboros-l2-13b-3.0.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/airoboros-l2-13B-3.0-GGUF and below it, a specific filename to download, such as: airoboros-l2-13b-3.0.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/airoboros-l2-13B-3.0-GGUF airoboros-l2-13b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/airoboros-l2-13B-3.0-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/airoboros-l2-13B-3.0-GGUF airoboros-l2-13b-3.0.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m airoboros-l2-13b-3.0.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "[INST] <<SYS>>\nYou are a help, unbiased, uncensored assistant.\n<</SYS>\n\n{prompt} [/INST]" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
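### How to load this model in Python code, using llama-cpp-python

Here is a minimal sketch using the llama-cpp-python chat API. It assumes `llama-cpp-python` is installed with a backend that matches your hardware and that the GGUF file is in the current directory; `chat_format="llama-2"` is meant to match the prompt template documented above, though template handling can vary slightly between versions. Adjust the system message and question to taste.

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./airoboros-l2-13b-3.0.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=32,          # set to 0 for CPU-only inference
    chat_format="llama-2",    # applies the [INST] <<SYS>> template for us
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful, unbiased, uncensored assistant."},
        {"role": "user", "content": "Plan a three day trip to Lisbon."},
    ],
    max_tokens=512,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```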
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/airoboros-l2-13B-3.0-GGUF", model_file="airoboros-l2-13b-3.0.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. 
Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: Jon Durbin's Airoboros L2 13B 3.0

### Overview

Another experimental model, using mostly synthetic data generated by [airoboros](https://github.com/jondurbin/airoboros)

#### IMPORTANT NOTE - llama-2 chat format!!!

The airoboros 3.0 models have been updated to use llama-2 chat format, rather than vicuna style user/assistant!

This is a breaking change, although most inference systems support llama-2 chat templating.

See prompt format below for all of the details.

#### Highlights

A model built on [airoboros-3.0 dataset](https://hf.co/datasets/jondurbin/airoboros-3.0), which is a continuation of [airoboros-2.2.1 dataset](https://hf.co/datasets/jondurbin/airoboros-2.2.1) with the following extras:

* MathJSON - this is not yet part of the code directly, but can be easily parsed and executed, as a way to perform deterministic calculations. LLMs suck at math, so this does problem formulation and lets the work be done by traditional operations.
* Human-generated RP multi-turn conversation - provided by an anon, this is a human created RP dataset, meant to assist with multi-turn coherency and chat experience.

This is a fairly general purpose model, but focuses heavily on instruction following, rather than casual chat/roleplay.

Huge thank you to the folks over at [a16z](https://a16z.com/) for sponsoring the costs associated with building models and associated tools!

### Prompt format

The prompt format is llama-2 chat.

```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>

{prompt} [/INST]
```

For multi-turn, the prompt format is as follows:

```
[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>

{prompt 0} [/INST] {response 0} </s><s>[INST] {prompt 1} [/INST] {response 1} </s><s>...[INST] {prompt N} [/INST]
```

Here is a simple python script that can help generate the prompt:

```python
def get_chat_prompt(message, chat_history, system_prompt):
    texts = [f'[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n']
    do_strip = False
    for user_input, response in chat_history:
        user_input = user_input.strip() if do_strip else user_input
        do_strip = True
        texts.append(f'{user_input} [/INST] {response.strip()} </s><s>[INST] ')
    message = message.strip() if do_strip else message
    texts.append(f'{message} [/INST] ')
    return ''.join(texts)


def get_prompt(item):
    """
    `item` is expected to be in chatml format, e.g.
    {
      "id": 1,
      "conversations": [
        {
          "role": "system",
          "content": "You are a helpful, unbiased, uncensored assistant."
        },
        {
          "role": "user",
          "content": "Hi!"
        },
        {
          "role": "assistant",
          "content": "Hello, how are you today?"
        },
        {
          "role": "user",
          "content": "Just peachy, and you?"
        }
      ]
    }
    """
    # Use the first turn as the system prompt if present, otherwise fall back to a default.
    system_prompt = item['conversations'][0]['content']
    offset = 1
    if item['conversations'][0]['role'] != 'system':
        system_prompt = 'You are an unbiased, uncensored, helpful assistant.'
        offset = 0
    chat_history = [
        (item['conversations'][idx]['content'], item['conversations'][idx + 1]['content'])
        for idx in range(offset, len(item['conversations']) - 1, 2)
    ]
    message = item['conversations'][-1]['content']
    return get_chat_prompt(message, chat_history, system_prompt)
```

### Fine tuning info

https://wandb.ai/jondurbin/airoboros-l2-13b-3.0/runs/97utxyk9?workspace=user-jondurbin

### Helpful usage tips

#### MathJSON

Ask mathematical questions, prefixed with "Create a MathJSON solution to the following:", and you should get output with a formula that can be executed by https://cortexjs.io/compute-engine/

I also made a (really quite hacky, incomplete, semi-functional) python version that works for most expressions, see [mathjson.py](https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py)

__Use a very low temperature!__

Example:

```
[INST] <<SYS>>
You are an assistant with vast knowledge in all things math.
<</SYS>>

Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```

Output:

```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.

Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².

Solution as MathJSON:
<mathjson>
[
  "Multiply",
  "Pi",
  [
    "Power",
    17.2456,
    2
  ]
]
</mathjson>
```

You can then validate the JSON between `<mathjson>` and `</mathjson>`, then pass the parsed JSON to compute-engine JS or the `evaluate` function in mathjson.py to calculate the response.

#### Context obedient question answering

By obedient, I mean the model was trained to ignore what it thinks it knows, and uses the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.

The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
[key0: value0]
[key1: value1]
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```

It's also helpful to add "Don't make up answers if you don't know." to your instruction block, to make sure the model doesn't make something up if the context is completely unrelated.

*The __only__ prompts that need this closed context formatting are closed-context instructions. Normal questions/instructions do not!*

I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with it.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. __Use a very low temperature!__ Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` #### Summarization 500 samples have been included from [this dataset](https://huggingface.co/datasets/mattpscott/airoboros-summarization), using the same format as contextual question answering, for example: ``` BEGININPUT {text to summarize} ENDINPUT BEGININSTRUCTION Summarize the input in around 130 words. ENDINSTRUCTION ``` #### Getting longer responses You can use a few techniques to get longer responses. Detailed prompts, with explicit instruction for word count: ``` Please compose a narrative set in the heart of an ancient library, steeped in the scent of old parchment and ink. The protagonist should be a young scholar who is dedicated to studying the art of storytelling and its evolution throughout history. In her pursuit of knowledge, she stumbles upon a forgotten tome that seems to possess an unusual aura. This book has the ability to bring stories to life, literally manifesting characters and scenarios from within its pages into reality. The main character must navigate through various epochs of storytelling - from oral traditions of tribal societies, through medieval minstrels' tales, to modern-day digital narratives - as they come alive around her. Each era presents its unique challenges and lessons about the power and impact of stories on human civilization. One such character could be a sentient quill pen, who was once used by renowned authors of yesteryears and now holds their wisdom and experiences. It becomes her mentor, guiding her through this journey with witty remarks and insightful commentary. Ensure that your tale encapsulates the thrill of adventure, the beauty of learning, and the profound connection between humans and their stories. All characters involved should be non-human entities. Feel free to explore creative liberties but maintain the mentioned elements. Your response should be approximately 2300 words. ``` Or, a simpler example: ``` Please create a long, detailed story about a dragon in an old growth forest who, for some reason, begins speaking the words of the source code of linux. ``` There are a few examples of next chapter completion as well, e.g.: ``` Write the next chapter of a historical fiction novel set in Paris during the 20th century. Here's a summary of the previous chapter: In the vibrant city of Paris, amid the tumultuous changes of the 20th century, our protagonist Margot, an aspiring fashion designer, has just secured an apprenticeship at a prestigious couture house. She meets Lucien, a charming journalist who covers the fashion industry. 
Together they navigate the ever-changing world of fashion and society, uncovering secrets that reveal the intricate links between style, politics, and culture. As the chapter concludes, they decide to delve deeper into the hidden corners of the fashion world to unravel its mysteries. Requirements for the next chapter: 1. Character Development of Margot and Lucien: - Margot's Evolution: Unfold more about Margot's past, her dreams of revolutionizing fashion, and her struggle to establish herself in a male-dominated industry. Illustrate her growing expertise, innovative ideas, and increasing dependence on Lucien. - Lucien's Complexity: Introduce uncertainties surrounding Lucien's background and real motives. Increase suspense by suggesting undisclosed information he possesses, while also highlighting his wit and perceptiveness. 2. Exploration of Paris and the Couture House: - Paris: Elaborate their journey through the bustling streets of Paris, including encounters with iconic figures, social unrest, and relics from different eras of French history. - The Couture House: Expand on the grandeur of the couture house they work in, filled with artistic masterpieces, intense competition, and cryptic notes hinting at a scandalous past. 3. Emergence of the Subplot: The Lost Collection: - Discovery: Have Margot and Lucien stumble upon a secret vault containing a lost collection designed before World War II, raising new questions about the previous owner and the influence of war on fashion. - Revelation: Capture their shock as they realize the designs were plagiarized, the potential repercussions, and the opportunities it presents for Margot's career. - Twist: End with a twist that suggests there are other stolen collections across Paris, setting up their new mission. Your response should be approximately 650 words. ``` #### Coding You can ask for fairly complex coding instructions with multiple criteria, e.g.: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or inline criteria: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can also optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` #### Agent/function calling The dataset includes many examples of function/args generation based on input criteria. This is somewhat similar to the OpenAI function calling, but the output is either JSON or YAML. Example prompt: ``` As an AI assistant, please select the most suitable function and parameters from the list of available functions below, based on the user's input. Provide your response in JSON format. Input: I want to know how many times 'Python' is mentioned in my text file. Available functions: file_analytics: description: This tool performs various operations on a text file. params: action: The operation we want to perform on the data, such as "count_occurrences", "find_line", etc. filters: keyword: The word or phrase we want to search for. 
``` Response: ```json { "function": "file_analytics", "params": { "action": "count_occurrences", "filters": { "keyword": "Python" } } } ``` #### Chain-of-thought You can ask for several possible responses to a given problem, with a ranking and final answer selection. Example prompt: ``` A group of people decide to play a round-robin tournament where every player plays every other player exactly once. If a total of 45 games were played, how many players participated in the tournament? Offer several potential answers, rank them according to mathematical logic, and pick the most feasible one. ``` Example response: ``` Solution 1: In a round-robin tournament, each player plays every other player exactly once. This means that the total number of games played is equal to the combination of all players taken two at a time (since a game involves two players). The formula for combinations is nC2 = n(n-1)/2, where n is the total number of players. We know that nC2 equals 45 (the total number of games), so we can set up the equation as follows: n(n-1)/2 = 45 Solving this equation gives us n=10. Final answer: There were 10 players in the tournament. Solution 2: Another way to approach this problem is by using the concept of triangular numbers. Triangular numbers are a sequence of numbers where each number can be represented as a triangle with dots. The nth triangular number is given by the sum of the first n natural numbers. If we consider each game as a dot and each player as a side of the triangle, then the total number of games (45) would correspond to the 9th triangular number because 1+2+3+4+5+6+7+8+9=45. However, since each side of the triangle represents a player, and there's one more player than sides in our model (because the last player has no one left to play against), we need to add one more to get the total number of players. Final answer: There were 10 players in the tournament. Solution 3: We could also use trial and error to solve this problem. Starting from a small number of players, we calculate how many games would be played in a round-robin tournament until we reach 45 games. For example, with 2 players, only 1 game is played; with 3 players, 3 games are played; with 4 players, 6 games are played, and so on. Continuing this process, we find that with 10 players, 45 games are played. Final answer: There were 10 players in the tournament. Ranking of solutions from best to worst: 1. Solution 1: This solution uses direct mathematical reasoning based on the properties of a round-robin tournament and solves an equation to find the answer. 2. Solution 2: This solution uses the concept of triangular numbers which requires additional steps and knowledge beyond the immediate problem context, but still arrives at the correct answer. 3. Solution 3: This solution relies on trial and error, which is less efficient and reliable than the other methods, but it eventually gets to the right answer. Best and final answer: There were 10 players in the tournament. ``` #### reWOO style execution planning The model now supports execution planning for complex instructions that would require making use of several tools. The output is just the plan, you must implement a mechanism to parse the output and actually call the functions! Example prompt: ``` Please construct a systematic plan to generate an optimal response to the user instruction, utilizing a set of provided tools. 
Each plan will correspond to an evidence value, which will be the output of one of the available functions given an input string that could be the user's question, one or more prior evidence values, or a combination of both.

Here are the tools available to be called:
DuckDuckGo[input]: Tool that enables the user to search for information using the DuckDuckGo search engine. This tool is helpful in locating a suitable list of sites that may contain the answer to the user's query. It does not directly answer the question but finds a list of sites that might have the answer.
TextScraper[input]: Load one or more websites from the input string containing newline delimited links, where input is one or more links, and produces plain text output containing the content of the links.
HyperlinkExtractor[input]: Extract links from plain text and produces a plain text, newline delimited response of links.
KnowledgeModel[input]: Question answering language model, particularly useful in answering questions based on an input passage of text. The input must be a text question that references an :evidence[n]: variable, e.g. What color is the cat, given :evidence1:?

The output should be in format:
Plan: [first action to take based on input question]
:evidence0: = [function to call with input parameter]
Plan: [next action to take, based on result of :evidence0:]
:evidence1: = [next function to call and input parameter, which may include reference to previous evidence, e.g. "Given context :evidence0"]
...
Answer: [:evidence[n]: containing the final answer.]

Question: Who won the Nobel Prize in Literature in 2020 and what is their most famous work?
```

Response:
```
Plan: Begin by conducting a web search to find out who won the Nobel Prize in Literature in 2020.
:evidence0: = DuckDuckGo[Nobel Prize in Literature 2020 winner]
Plan: Utilize the KnowledgeModel to interpret the search results and find the name of the laureate.
:evidence1: = KnowledgeModel[Who won the Nobel Prize in Literature in 2020, given :evidence0:?]
Plan: Conduct another web search to find the most famous work of the identified laureate.
:evidence2: = DuckDuckGo[Most famous work of :evidence1:]
Plan: Extract the relevant links from the DuckDuckGo search results for a more focused search.
:evidence3: = HyperlinkExtractor[:evidence2:]
Plan: Use the TextScraper tool to extract information from the relevant links.
:evidence4: = TextScraper[:evidence3:]
Plan: Finally, utilize the KnowledgeModel to identify and summarize the most famous work of the laureate from the extracted information.
:evidence5: = KnowledgeModel[What is the most famous work of :evidence1:, given :evidence4:?]
Answer: :evidence5:
```

For this to be useful, you'd have to parse the output plan text, and implement/call each of the functions. This is just pseudo-code, completely untested off the top of my head, and obviously would require full implementation + hardening:

```python
import re
import requests

def inject_context(input_text, **context):
    # Replace :evidenceN: references with the values gathered so far.
    for ref in set(re.findall(r"(:evidence[0-9]+:)", input_text, re.I)):
        input_text = input_text.replace(ref, context.get(ref, ""))
    return input_text

def duckduckgo(input_text, **context):
    search_string = inject_context(input_text, **context)
    # ... search via duck duck go using search_string, and return the text content ...
    raise NotImplementedError("plug in a real search call here")

def link_extractor(input_text, **context):
    input_text = inject_context(input_text, **context)
    return "\n".join(list(set(re.findall(r"(https?://[^\s]+?\.?)", input_text, re.I))))

def scrape(input_text, **context):
    input_text = inject_context(input_text, **context)
    text = []
    for link in input_text.splitlines():
        text.append(requests.get(link).text)
    return "\n".join(text)

def infer(input_text, **context):
    prompt = inject_context(input_text, **context)
    # ... call the model with prompt, and return its output ...
    raise NotImplementedError("call your model here and return its text output")

def parse_plan(plan):
    # Map the tool names used in the plan to the Python callables above.
    method_map = {
        "DuckDuckGo": duckduckgo,
        "HyperlinkExtractor": link_extractor,
        "KnowledgeModel": infer,
        "TextScraper": scrape,
    }
    context = {}
    for line in plan.strip().splitlines():
        if line.startswith("Plan:"):
            print(line)
            continue
        # Lines look like ':evidence0: = DuckDuckGo[some query]'
        parts = re.match(r"^(:evidence[0-9]+:)\s*=\s*([^\[]+)(\[.*\])\s*$", line, re.I)
        if not parts:
            if line.startswith("Answer: "):
                return context.get(line.split(" ")[-1].strip(), "Answer couldn't be generated...")
            raise RuntimeError("bad format: " + line)
        context[parts.group(1)] = method_map[parts.group(2).strip()](parts.group(3), **context)
```

### Contribute

If you're interested in new functionality, particularly a new "instructor" type to generate a specific type of training data, take a look at the dataset generation tool repo: https://github.com/jondurbin/airoboros and either make a PR or open an issue with details.

To help me with the OpenAI/compute costs:

- https://bmc.link/jondurbin
- ETH 0xce914eAFC2fe52FdceE59565Dd92c06f776fcb11
- BTC bc1qdwuth4vlg8x37ggntlxu5cjfwgmdy5zaa7pswf

### Licence and usage restrictions

The airoboros 3.0 models are built on top of multiple base models, each with their own license/restrictions.

The models with `-l2` in the name have a custom Meta license:
- See the [meta-license/LICENSE.txt](meta-license/LICENSE.txt) file attached for the original license provided by Meta.
- See also [meta-license/USE_POLICY.md](meta-license/USE_POLICY.md) and [meta-license/Responsible-Use-Guide.pdf](meta-license/Responsible-Use-Guide.pdf), also provided by Meta.

The models with `-m-` are mistral-7b (apache 2.0)

The model with `-3b` uses Stability AI, which has a `cc-by-sa-4.0` license.

The fine-tuning data was mostly generated by OpenAI API calls to gpt-4, via [airoboros](https://github.com/jondurbin/airoboros)

The ToS for OpenAI API usage has a clause preventing the output from being used to train a model that __competes__ with OpenAI

- what does *compete* actually mean here?
- these small open source models will not produce output anywhere near the quality of gpt-4, or even gpt-3.5, so I can't imagine this could credibly be considered competing in the first place
- if someone else uses the dataset to do the same, they wouldn't necessarily be violating the ToS because they didn't call the API, so I don't know how that works
- the training data used in essentially all large language models includes a significant amount of copyrighted or otherwise non-permissive licensing in the first place
- other work using the self-instruct method, e.g. the original here: https://github.com/yizhongw/self-instruct released the data and model as apache-2

I am purposely leaving this license ambiguous (other than the fact you must comply with the Meta original license for llama-2) because I am not a lawyer and refuse to attempt to interpret all of the terms accordingly.

Your best bet is probably to avoid using this commercially due to the OpenAI API usage.

Either way, by using this model, you agree to completely indemnify me.
<!-- original-model-card end -->
22h/cabrita_7b_pt_850000
22h
2023-11-09T13:10:27Z
412
4
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-11-09T12:25:09Z
--- license: llama2 ---
genaitraining/llama-2-7b-domain-tuned
genaitraining
2023-12-05T06:09:40Z
412
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-05T05:59:23Z
Entry not found
isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing
isjackwild
2024-01-09T16:46:13Z
412
1
transformers
[ "transformers", "safetensors", "segformer", "image-segmentation", "dataset:mattmdjaga/human_parsing_dataset", "arxiv:1910.09700", "license:mit", "endpoints_compatible", "region:us" ]
image-segmentation
2024-01-09T13:33:52Z
--- license: mit datasets: - mattmdjaga/human_parsing_dataset tags: - image-segmentation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
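As a quick-start sketch for this checkpoint (assuming the standard `transformers` image-segmentation pipeline applies to this SegFormer model; `"photo.jpg"` is a placeholder for your own image path):

```python
from transformers import pipeline

# Load the fine-tuned SegFormer checkpoint and run semantic segmentation on one image.
segmenter = pipeline(
    "image-segmentation",
    model="isjackwild/segformer-b0-finetuned-segments-skin-hair-clothing",
)

results = segmenter("photo.jpg")  # local path, URL, or PIL.Image
for segment in results:
    print(segment["label"], segment["mask"].size)
```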
kaczmarj/colorectal-resnet34.penn
kaczmarj
2024-02-26T21:18:06Z
412
0
transformers
[ "transformers", "pytorch", "safetensors", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
null
2024-01-13T02:45:40Z
--- license: cc-by-4.0 --- # Model Card for colorectal-resnet34.penn This model was trained by Yuwei Zhang at Stony Brook University. More information coming in the future. # Use with WSInfer One can use this model via WSInfer. See [the documentation](https://wsinfer.readthedocs.io/en/latest/) for more information, like how to install and use it. ``` wsinfer run -i slides/ -o output/ -m colorectal-resnet34.penn ```
kaitchup/Qwen1.5-7B-bnb-4bit
kaitchup
2024-02-17T04:24:55Z
412
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-02-17T04:22:25Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
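As a quick-start sketch for this checkpoint (assuming, as the repo name and tags suggest, that it is a Qwen1.5-7B model saved with a bitsandbytes 4-bit quantization config, and that `transformers`, `accelerate` and `bitsandbytes` are installed on a CUDA machine):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Qwen1.5-7B-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The 4-bit quantization config stored with the checkpoint should be picked up automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```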
Mr-Vicky-01/Bart-Finetuned-conversational-summarization
Mr-Vicky-01
2024-04-19T14:28:22Z
412
7
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "summarization", "en", "dataset:EdinburghNLP/xsum", "dataset:samsum", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2024-03-19T12:00:10Z
--- license: mit pipeline_tag: summarization widget: - text: >- Now, there is no doubt that one of the most important aspects of any Pixel phone is its camera. And there might be good news for all camera lovers. Rumours have suggested that the Pixel 9 could come with a telephoto lens, improving its photography capabilities even further. Google will likely continue to focus on using AI to enhance its camera performance, in order to make sure that Pixel phones remain top contenders in the world of mobile photography. - text: >- Elon Reeve Musk (/ˈiːlɒn/ EE-lon; born June 28, 1971) is a businessman and investor. He is the founder, chairman, CEO, and CTO of SpaceX; angel investor, CEO, product architect, and former chairman of Tesla, Inc.; owner, executive chairman, and CTO of X Corp.; founder of the Boring Company and xAI; co-founder of Neuralink and OpenAI; and president of the Musk Foundation. He is one of the wealthiest people in the world, with an estimated net worth of US$190 billion as of March 2024, according to the Bloomberg Billionaires Index, and $195 billion according to Forbes, primarily from his ownership stakes in Tesla and SpaceX.[5][6] A member of the wealthy South African Musk family, Elon was born in Pretoria and briefly attended the University of Pretoria before immigrating to Canada at age 18, acquiring citizenship through his Canadian-born mother. Two years later, he matriculated at Queen University at Kingston in Canada. Musk later transferred to the University of Pennsylvania, and received bachelor degrees in economics and physics. He moved to California in 1995 to attend Stanford University, but dropped out after two days and, with his brother Kimbal, co-founded online city guide software company Zip2. The startup was acquired by Compaq for $307 million in 1999, and that same year Musk co-founded X.com, a direct bank. X.com merged with Confinity in 2000 to form PayPal. In October 2002, eBay acquired PayPal for $1.5 billion, and that same year, with $100 million of the money he made, Musk founded SpaceX, a spaceflight services company. In 2004, he became an early investor in electric vehicle manufacturer Tesla Motors, Inc. (now Tesla, Inc.). He became its chairman and product architect, assuming the position of CEO in 2008. In 2006, Musk helped create SolarCity, a solar-energy company that was acquired by Tesla in 2016 and became Tesla Energy. In 2013, he proposed a hyperloop high-speed vactrain transportation system. In 2015, he co-founded OpenAI, a nonprofit artificial intelligence research company. The following year, Musk co-founded Neuralink—a neurotechnology company developing brain–computer interfaces—and the Boring Company, a tunnel construction company. In 2022, he acquired Twitter for $44 billion. He subsequently merged the company into newly created X Corp. and rebranded the service as X the following year. In March 2023, he founded xAI, an artificial intelligence company. Musk has expressed views that have made him a polarizing figure.[7] He has been criticized for making unscientific and misleading statements, including COVID-19 misinformation and antisemitic conspiracy theories.[7][8][9][10] His ownership of Twitter has been similarly controversial, being marked by the laying off of a large number of employees, an increase in hate speech and misinformation and disinformation on the website, as well as changes to Twitter Blue verification. In 2018, the U.S. 
Securities and Exchange Commission (SEC) sued him, alleging that he had falsely announced that he had secured funding for a private takeover of Tesla. To settle the case, Musk stepped down as the chairman of Tesla and paid a $20 million fine. - text: >- OnePlus faces the prospect of going out of store in some states in the Indian market next month. Reports this week suggest OnePlus phones will be taken off around 4,500 stores in different parts of the country from May 1, 2024 onwards. It has been pointed out that the retailer organisation taking charge of stores in the South and western parts of the country are not pleased with the company for various reasons. The South Indian Organized Retailers Association (ORA) has been quoted saying that OnePlus does not garner enough margins for its network to sell OnePlus phones and that will definitely come as a big jolt to the popular brand. Report also mentions that the ORA has sent a formal complaint to OnePlus executives earlier this week, stating that until the issues regarding OnePlus warranty and margins are not resolved, retailers will decide against selling OnePlus phones at their stores in states like Gujarat, Maharashtra, Andhra Pradesh, Telangana, Karnataka, and Tamil Nadu, that caters to a total of 4,500 stores. Having said that, the retailers looking to stop selling OnePlus phones are associated with multi-retail brands, so it is possible that small-time mobile shops could still offer phones from the brand. OnePlus has claimed to be one of the top-sellers via online channels but its focus on the offline market seems to have wavered which seems to have caused this displeasure among the retailers in the country. We have previously seen that offline retailers play a big role in the growth and demand for phones, and OnePlus will need to do everything to make sure this issue is resolved at the earliest, before it starts to impact its online sales as well. We’re still a few weeks away from the deadline, which ensures OnePlus can still get things sorted and go back to business as usual in these regions. datasets: - EdinburghNLP/xsum - samsum language: - en library_name: transformers --- # Model Description This model is based on the Facebook BART (Bidirectional and Auto-Regressive Transformers) architecture, specifically the large variant fine-tuned for text summarization tasks. BART is a sequence-to-sequence model introduced by Facebook AI, capable of handling various natural language processing tasks, including summarization. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65af937a30e33d1b60c8772b/jWIThYZRayUFKgScplHrZ.png) ## Model Details: - **Architecture**: BART Large CNN - **Pre-trained model**: BART Large - **Fine-tuned for**: Text Summarization - **Fine-tuning dataset**: [xsum](https://huggingface.co/datasets/EdinburghNLP/xsum) & [samsum](https://huggingface.co/datasets/samsum) ## Space Link: [Summarization Model](https://huggingface.co/spaces/Mr-Vicky-01/Summarization) ## How To FineTune This Model: [Github](https://github.com/Mr-Vicky-01/English-Summization) ## Usage: ### Installation: You can install the necessary libraries using pip: ```bash pip install transformers ``` ### Inference Below is a simple snippet showing how to use this model for paragraph summarization in PyTorch.
```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Mr-Vicky-01/Bart-Finetuned-conversational-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("Mr-Vicky-01/Bart-Finetuned-conversational-summarization") def generate_summary(text): inputs = tokenizer([text], max_length=1024, return_tensors='pt', truncation=True) summary_ids = model.generate(inputs['input_ids'], max_new_tokens=100, do_sample=False) summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True) return summary text_to_summarize = """Now, there is no doubt that one of the most important aspects of any Pixel phone is its camera. And there might be good news for all camera lovers. Rumours have suggested that the Pixel 9 could come with a telephoto lens, improving its photography capabilities even further. Google will likely continue to focus on using AI to enhance its camera performance, in order to make sure that Pixel phones remain top contenders in the world of mobile photography.""" summary = generate_summary(text_to_summarize) print(summary) ``` ``` Google is rumoured to be about to unveil its next-generation Pixel smartphone, the Google Pixel 9,which is expected to come with a telephoto lens and an artificial intelligence (AI) system to improve its camera capabilities, as well as improve the quality of its images. ``` ### Training Parameters ```python num_train_epochs=1, warmup_steps = 500, per_device_train_batch_size=4, per_device_eval_batch_size=4, weight_decay = 0.01, gradient_accumulation_steps=16 ```
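As a complement to the inference snippet above, the same checkpoint can also be driven through the high-level `pipeline` API. This is a minimal sketch: the example text is taken from the widget samples above, and the `max_length`/`min_length` values are illustrative rather than settings tuned by the model author.

```python
from transformers import pipeline

# Load the tokenizer and model in one call
summarizer = pipeline(
    "summarization",
    model="Mr-Vicky-01/Bart-Finetuned-conversational-summarization",
)

article = (
    "Rumours have suggested that the Pixel 9 could come with a telephoto lens, "
    "improving its photography capabilities even further. Google will likely "
    "continue to focus on using AI to enhance its camera performance, in order "
    "to make sure that Pixel phones remain top contenders in the world of "
    "mobile photography."
)

# max_length / min_length are illustrative; adjust them to your inputs
result = summarizer(article, max_length=100, min_length=20, do_sample=False)
print(result[0]["summary_text"])
```

The pipeline handles tokenization, truncation, and decoding internally, so it is convenient for quick experiments, while the explicit `AutoModelForSeq2SeqLM` snippet above gives finer control over generation.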
mradermacher/FNCARL-7b-GGUF
mradermacher
2024-05-06T05:19:57Z
412
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:jambroz/FNCARL-7b", "endpoints_compatible", "region:us" ]
null
2024-04-03T20:47:28Z
--- base_model: jambroz/FNCARL-7b language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/jambroz/FNCARL-7b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q2_K.gguf) | Q2_K | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ3_XS.gguf) | IQ3_XS | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q3_K_S.gguf) | Q3_K_S | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ3_M.gguf) | IQ3_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q3_K_L.gguf) | Q3_K_L | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.IQ4_XS.gguf) | IQ4_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q5_K_S.gguf) | Q5_K_S | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q5_K_M.gguf) | Q5_K_M | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q6_K.gguf) | Q6_K | 6.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/FNCARL-7b-GGUF/resolve/main/FNCARL-7b.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
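For a concrete starting point with the files above, one of the listed quants can be fetched and loaded with `llama-cpp-python`. Treat this as a sketch: the Q4_K_S filename is taken from the table, while the context size, GPU offload value, and prompt are illustrative assumptions that should be adapted to your hardware and to the underlying model's prompt format.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_S quant listed in the table above
model_path = hf_hub_download(
    repo_id="mradermacher/FNCARL-7b-GGUF",
    filename="FNCARL-7b.Q4_K_S.gguf",
)

# n_ctx and n_gpu_layers are illustrative; use n_gpu_layers=0 for CPU-only inference
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=35)

output = llm("Briefly explain what a GGUF quantization is.", max_tokens=128)
print(output["choices"][0]["text"])
```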
MulaBR/Mula-4x160-v0.1
MulaBR
2024-06-04T11:21:46Z
412
5
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "text-generation-inference", "pt", "dataset:nicholasKluge/Pt-Corpus-Instruct", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-04-10T13:53:31Z
--- language: - pt license: apache-2.0 library_name: transformers tags: - text-generation-inference datasets: - nicholasKluge/Pt-Corpus-Instruct metrics: - perplexity pipeline_tag: text-generation widget: - text: A PUCRS é uma universidade example_title: Exemplo - text: A muitos anos atrás, em uma galáxia muito distante, vivia uma raça de example_title: Exemplo - text: Em meio a um escândalo, a frente parlamentar pediu ao Senador Silva para example_title: Exemplo inference: parameters: repetition_penalty: 1.2 temperature: 0.2 top_k: 20 top_p: 0.2 max_new_tokens: 150 co2_eq_emissions: emissions: 7600 source: CodeCarbon training_type: pre-training geographical_location: Germany hardware_used: NVIDIA A100-SXM4-40GB model-index: - name: Mula-4x160-v0.1 results: - task: type: text-generation name: Text Generation dataset: name: ENEM Challenge (No Images) type: eduagarcia/enem_challenge split: train args: num_few_shot: 3 metrics: - type: acc value: 21.34 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: BLUEX (No Images) type: eduagarcia-temp/BLUEX_without_images split: train args: num_few_shot: 3 metrics: - type: acc value: 25.17 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: OAB Exams type: eduagarcia/oab_exams split: train args: num_few_shot: 3 metrics: - type: acc value: 25.06 name: accuracy source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 RTE type: assin2 split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 33.57 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Assin2 STS type: eduagarcia/portuguese_benchmark split: test args: num_few_shot: 15 metrics: - type: pearson value: 11.35 name: pearson source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: FaQuAD NLI type: ruanchaves/faquad-nli split: test args: num_few_shot: 15 metrics: - type: f1_macro value: 43.97 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HateBR Binary type: ruanchaves/hatebr split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 41.5 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: PT Hate Speech Binary type: hate_speech_portuguese split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 22.99 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard - task: type: text-generation name: 
Text Generation dataset: name: tweetSentBR type: eduagarcia/tweetsentbr_fewshot split: test args: num_few_shot: 25 metrics: - type: f1_macro value: 11.24 name: f1-macro source: url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=MulaBR/Mula-4x160-v0.1 name: Open Portuguese LLM Leaderboard --- # Mula-4x160-v0.1 <img src="./logo-no-bg.png" alt="Mula" height="200"> ## Model Summary Mula is a series of Sparse Mixture of Experts (SMoE) language models, all trained natively in Brazilian Portuguese, designed to help democratize LLMs for low-resource languages. Mula-4x160-v0.1 is one of our first experiments in pre-training a SMoE, using the [Pt-Corpus-Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) dataset. It has 4 experts per layer and activates 2 for each token. Future versions of Mula will be trained on a substantially larger Brazilian Portuguese dataset. ## Details - **Architecture:** a Sparse Mixture of Experts (Mixtral implementation) pre-trained via causal language modeling - **Size:** 407,820,288 parameters (only 237,950,976 activated parameters during runtime) - **Context length:** 2048 tokens - **Dataset:** [Pt-Corpus Instruct](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct) (6.2B tokens) - **Language:** Portuguese - **Training time:** ~ 30 hours - **Emissions:** 7.6 KgCO2eq (Germany) - **Total energy consumption:** 15 kWh ## Intended Uses The primary intended use of Mula-4x160-v0.1 is to research the challenges of developing language models for low-resource languages. Checkpoints saved during training are intended to provide a controlled setting for performing scientific experiments. You may also further fine-tune and adapt Mula-4x160-v0.1 for deployment, as long as your use follows the Apache 2.0 license. If you decide to use pre-trained Mula-4x160-v0.1 as a basis for your fine-tuned model, please conduct your own risk and bias assessment. ## Out-of-scope Use Mula-4x160-v0.1 is not intended for deployment. It is not a product and should not be used for human-facing interactions. Mula-4x160-v0.1 is a Brazilian Portuguese-only model and is not suitable for translation or for generating text in other languages. Mula-4x160-v0.1 has not been fine-tuned for downstream contexts in which language models are commonly deployed.
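Because the checkpoint follows the Hugging Face Mixtral implementation mentioned under Details, the expert layout can be checked directly from its configuration. The sketch below is illustrative: the attribute names (`num_local_experts`, `num_experts_per_tok`, `max_position_embeddings`) come from the Mixtral configuration class, and the printed values are expected to match the 4-experts / top-2 routing and 2048-token context described above.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("MulaBR/Mula-4x160-v0.1")

# Mixtral-style SMoE settings: experts per layer and experts routed per token
print("experts per layer:", config.num_local_experts)     # expected: 4
print("experts per token:", config.num_experts_per_tok)   # expected: 2
print("context length:", config.max_position_embeddings)  # expected: 2048
```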
## Basic usage Using the `pipeline`: ```python from transformers import pipeline generator = pipeline("text-generation", model="MulaBR/Mula-4x160-v0.1") completions = generator("Astronomia é a ciência", num_return_sequences=2, max_new_tokens=100) for comp in completions: print(f"🤖 {comp['generated_text']}") ``` Using the `AutoTokenizer` and `AutoModelForCausalLM`: ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch # Load model and the tokenizer tokenizer = AutoTokenizer.from_pretrained("MulaBR/Mula-4x160-v0.1", revision='main') model = AutoModelForCausalLM.from_pretrained("MulaBR/Mula-4x160-v0.1", revision='main') # Pass the model to your device device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.eval() model.to(device) # Tokenize the inputs and pass them to the device inputs = tokenizer("Astronomia é a ciência", return_tensors="pt").to(device) # Generate some text completions = model.generate(**inputs, num_return_sequences=2, max_new_tokens=100) # Print the generated text for i, completion in enumerate(completions): print(f'🤖 {tokenizer.decode(completion)}') ``` ## Limitations Like almost all other language models trained on large text datasets scraped from the web, Mula-4x160-v0.1 exhibits behavior that does not make them an out-of-the-box solution to many real-world applications, especially those requiring factual, reliable, nontoxic text generation. Our models are all subject to the following: - **Hallucinations:** This model can produce content that can be mistaken for truth but is, in fact, misleading or entirely false, i.e., hallucination. - **Biases and Toxicity:** This model inherits the social and historical stereotypes from the data used to train it. Given these biases, the model can produce toxic content, i.e., harmful, offensive, or detrimental to individuals, groups, or communities. - **Unreliable Code:** The model may produce incorrect code snippets and statements. These code generations should not be treated as suggestions or accurate solutions. - **Language Limitations:** The model is primarily designed to understand standard Brazilian Portuguese. Other languages might challenge its comprehension, leading to potential misinterpretations or errors in response. - **Repetition and Verbosity:** The model may get stuck on repetition loops (especially if the repetition penalty during generations is set to a meager value) or produce verbose responses unrelated to the prompt it was given. Hence, even though our models are released with a permissive license, we urge users to perform their risk analysis on these models if intending to use them for real-world applications and also have humans moderating the outputs of these models in applications where they will interact with an audience, guaranteeing users are always aware they are interacting with a language model. ## Benchmarks Evaluations on benchmarks were performed using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) (by [EleutherAI](https://www.eleuther.ai/)). [Laiviet](https://github.com/laiviet/lm-evaluation-harness) translated the tasks from the LM-Evaluation-Harness we used. 
| | **ARC** | **HellaSwag** | **MMLU** | **TruthfulQA** | |----------------------|-----------|---------------|-----------|----------------| | **Mula-4x160-v0.1** | 27.09 | 31.41 | 28.15 | 39.81 | | **Mula-8x160-v0.1** | 26.15 | 33.06 | 28.14 | 41.69 | Evaluations on Brazilian Portuguese benchmarks were performed using a [Portuguese implementation of the EleutherAI LM Evaluation Harness](https://github.com/eduagarcia/lm-evaluation-harness-pt) (created by [Eduardo Garcia](https://github.com/eduagarcia/lm-evaluation-harness-pt)). | | **ASSIN2 RTE** | **ASSIN2 STS** | **BLUEX** | **ENEM** | **FAQUAD NLI** | **HateBR** | **PT Hate Speech** | **OAB Exams** | **TweetSentBR** | |-----------------------|----------------|----------------|-----------|----------|----------------|------------|--------------------|---------------|-----------------| | **Mula-4x160-v0.1** | 33.57 | 11.35 | 25.17 | 21.34 | 43.97 | 41.50 | 22.99 | 25.06 | 11.24 | | **Mula-8x160-v0.1** | 33.51 | 0 | 20.17 | 19.94 | 43.97 | 33.33 | 42.69 | 24.37 | 24.60 | ## Cite as 🤗 ```latex @misc{mula2024BR, title = {Mula: a Sparse Mixture of Experts Language Model trained in Brazilian Portuguese}, author = {Corr{\^e}a, Nicholas Kluge and Sen, Aniket and Falk, Sophia and Fatimah, Shiza}, howpublished = {\url{https://huggingface.co/MulaBR}}, year={2024} } ``` ## License Mula-4x160-v0.1 is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details. ## Acknowledgements The authors gratefully acknowledge the granted access to the [Marvin cluster](https://www.hpc.uni-bonn.de/en/systems/marvin) hosted by the [University of Bonn](https://www.uni-bonn.de/en) along with the support provided by its High Performance Computing & Analytics Lab.
mradermacher/MysticNoromaidx-i1-GGUF
mradermacher
2024-05-06T04:59:36Z
412
0
transformers
[ "transformers", "gguf", "en", "base_model:Fredithefish/MysticNoromaidx", "endpoints_compatible", "region:us" ]
null
2024-04-12T19:23:42Z
--- base_model: Fredithefish/MysticNoromaidx language: - en library_name: transformers quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/Fredithefish/MysticNoromaidx <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MysticNoromaidx-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/MysticNoromaidx-i1-GGUF/resolve/main/MysticNoromaidx.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/Experiment26-7B-GGUF
mradermacher
2024-05-06T04:59:13Z
412
0
transformers
[ "transformers", "gguf", "chat", "en", "base_model:yam-peleg/Experiment26-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T23:47:59Z
--- base_model: yam-peleg/Experiment26-7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/yam-peleg/Experiment26-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Experiment26-7B-GGUF/resolve/main/Experiment26-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
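Since the underlying model is tagged for chat, a quick way to exercise one of these quants is through the chat-completion interface of `llama-cpp-python`. This is a sketch under stated assumptions: the Q4_K_M filename comes from the table above, the context size is illustrative, and it assumes the chat template stored in the GGUF metadata is picked up automatically; pass an explicit `chat_format` if it is not.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M quant listed in the table above
model_path = hf_hub_download(
    repo_id="mradermacher/Experiment26-7B-GGUF",
    filename="Experiment26-7B.Q4_K_M.gguf",
)

# chat_format is left unset so that a template embedded in the GGUF metadata,
# if present, is used; n_gpu_layers=0 keeps this runnable on CPU
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a Q4_K_M quant trades off."},
    ],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```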
RichardErkhov/akhooli_-_gpt2-small-arabic-gguf
RichardErkhov
2024-04-17T10:52:41Z
412
0
null
[ "gguf", "region:us" ]
null
2024-04-17T10:49:02Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-small-arabic - GGUF - Model creator: https://huggingface.co/akhooli/ - Original model: https://huggingface.co/akhooli/gpt2-small-arabic/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-small-arabic.Q2_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q2_K.gguf) | Q2_K | 0.08GB | | [gpt2-small-arabic.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [gpt2-small-arabic.IQ3_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ3_S.gguf) | IQ3_S | 0.08GB | | [gpt2-small-arabic.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [gpt2-small-arabic.IQ3_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ3_M.gguf) | IQ3_M | 0.09GB | | [gpt2-small-arabic.Q3_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K.gguf) | Q3_K | 0.09GB | | [gpt2-small-arabic.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [gpt2-small-arabic.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K_L.gguf) | Q3_K_L | 0.1GB | | [gpt2-small-arabic.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ4_XS.gguf) | IQ4_XS | 0.1GB | | [gpt2-small-arabic.Q4_0.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_0.gguf) | Q4_0 | 0.1GB | | [gpt2-small-arabic.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [gpt2-small-arabic.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [gpt2-small-arabic.Q4_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_K.gguf) | Q4_K | 0.11GB | | [gpt2-small-arabic.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_K_M.gguf) | Q4_K_M | 0.11GB | | [gpt2-small-arabic.Q4_1.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_1.gguf) | Q4_1 | 0.11GB | | [gpt2-small-arabic.Q5_0.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_0.gguf) | Q5_0 | 0.11GB | | [gpt2-small-arabic.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [gpt2-small-arabic.Q5_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_K.gguf) | Q5_K | 0.12GB | | [gpt2-small-arabic.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_K_M.gguf) | Q5_K_M | 0.12GB | | 
[gpt2-small-arabic.Q5_1.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_1.gguf) | Q5_1 | 0.12GB | | [gpt2-small-arabic.Q6_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q6_K.gguf) | Q6_K | 0.13GB | Original model description: --- language: "ar" datasets: - Arabic Wikipedia metrics: - none --- # GPT2-Small-Arabic ## Model description A GPT-2 model trained on the Arabic Wikipedia dataset, based on gpt2-small (using Fastai2). ## Intended uses & limitations #### How to use An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing). Both text and poetry (fine-tuned model) generation are included. #### Limitations and bias GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipedia quality, no diacritics) and training performance. Use it for demonstrations or proofs of concept, but not as production code. ## Training data This pretrained model used the Arabic Wikipedia dump (around 900 MB). ## Training procedure Training was done using the [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using a free GPU. ## Eval results The final perplexity reached was 72.19 (loss: 4.28, accuracy: 0.307). ### BibTeX entry and citation info ```bibtex @inproceedings{Abed Khooli, year={2020} } ```
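For readers who want to try the original full-precision checkpoint rather than the GGUF files, a minimal text-generation sketch with the `transformers` library is shown below. The sampling settings are illustrative, and the short Arabic prompt (roughly "the history of science") is only an example input.

```python
from transformers import pipeline

# The original full-precision model that these GGUF files were converted from
generator = pipeline("text-generation", model="akhooli/gpt2-small-arabic")

# A short Arabic prompt, roughly "the history of science"
outputs = generator("تاريخ العلوم", max_new_tokens=50, do_sample=True, top_p=0.95)
print(outputs[0]["generated_text"])
```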
asiansoul/SmartLlama-3-Ko-8B-256k-PoSE-GGUF
asiansoul
2024-05-03T00:10:06Z
412
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "arxiv:2309.10400", "base_model:winglian/llama-3-8b-256k-PoSE", "base_model:Locutusque/Llama-3-Orca-1.0-8B", "base_model:NousResearch/Meta-Llama-3-8B", "base_model:abacusai/Llama-3-Smaug-8B", "base_model:beomi/Llama-3-Open-Ko-8B-Instruct-preview", "base_model:NousResearch/Meta-Llama-3-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2024-05-01T14:22:53Z
--- base_model: - winglian/llama-3-8b-256k-PoSE - Locutusque/Llama-3-Orca-1.0-8B - NousResearch/Meta-Llama-3-8B - abacusai/Llama-3-Smaug-8B - beomi/Llama-3-Open-Ko-8B-Instruct-preview - NousResearch/Meta-Llama-3-8B-Instruct library_name: transformers tags: - mergekit - merge --- # 🇰🇷 SmartLlama-3-Ko-8B-256k-PoSE <a href="https://ibb.co/rs8DhB8"><img src="https://i.ibb.co/8cv1wyv/Smart-Llama-3-Ko-8-B-256k-Po-SE.png" alt="Smart-Llama-3-Ko-8-B-256k-Po-SE" border="0"></a> SmartLlama-3-Ko-8B-256k-[PoSE](https://huggingface.co/papers/2309.10400) is an advanced AI model that integrates the capabilities of several advanced language models, designed to excel in a variety of tasks ranging from technical problem-solving to multilingual communication, especially with its extended context length of 256k tokens. This model is uniquely positioned to handle larger and more complex datasets and longer conversational contexts, making it ideal for deep learning applications requiring extensive text understanding and generation. ## 📕 Merge Details ### Component Models and Contributions - **NousResearch/Meta-Llama-3-8B and Meta-Llama-3-8B-Instruct**: These models provide a solid foundation for general language understanding and instruction-following capabilities. - **winglian/llama-3-8b-256k-PoSE**: Utilizes Positional Skip-wise Training (PoSE) to extend Llama's context length to 256k, significantly improving the model's ability to handle extensive texts and complex instructions, enhancing performance in tasks requiring long-duration focus and memory. - **Locutusque/Llama-3-Orca-1.0-8B**: Specializes in mathematical, coding, and writing tasks, bringing precision to technical and creative outputs. - **abacusai/Llama-3-Smaug-8B**: Improves the model's performance in real-world, multi-turn conversations, which is crucial for applications in customer service and interactive learning environments. - **beomi/Llama-3-Open-Ko-8B-Instruct-preview**: Focuses on improving understanding and generation of Korean, offering robust solutions for bilingual or multilingual applications targeting Korean-speaking audiences. ## 🖼️ Key Features - **Extended Context Length**: Utilizes the PoSE (Positional Encoding) technique to handle up to 256,000 tokens, making it ideal for analyzing large volumes of text such as books, comprehensive reports, and lengthy communications. - **Multilingual Support**: While primarily focused on Korean language processing, this model also provides robust support for multiple languages, enhancing its utility in global applications. - **Advanced Integration of Models**: Combines strengths from various models including NousResearch's Meta-Llama-3-8B, the instruction-following capabilities of Llama-3-Open-Ko-8B-Instruct-preview, and specialized capabilities from models like Llama-3-Smaug-8B for nuanced dialogues and Orca-1.0-8B for technical precision. ## 🎨 Models Merged The following models were included in the merge: - **winglian/llama-3-8b-256k-PoSE**: [Extends the context handling capability](https://huggingface.co/winglian/llama-3-8b-256k-PoSE). This model uses Positional Skip-wise Training (PoSE) to enhance the handling of extended context lengths, up to 256k tokens. - **Locutusque/Llama-3-Orca-1.0-8B**: [Enhances abilities in handling technical content](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B). Specialized in computational, scientific, and technical tasks, improving the model's ability to process complex academic and technical language. 
- **abacusai/Llama-3-Smaug-8B**: [Improves multi-turn conversational abilities](https://huggingface.co/abacusai/Llama-3-Smaug-8B). Boosts performance in engaging in lengthy, context-aware dialogues necessary for effective customer service and interactive learning. - **beomi/Llama-3-Open-Ko-8B-Instruct-preview**: [Provides enhanced capabilities for Korean language processing](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview). This model is fine-tuned to understand and generate Korean, making it ideal for applications targeting Korean-speaking users. - **NousResearch/Meta-Llama-3-8B-Instruct**: [Offers advanced instruction-following capabilities](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct). It is optimized to follow complex instructions, enhancing the model's utility in task-oriented dialogues and applications that require a high level of understanding and execution of user commands. ### 🖋️ Merge Method - **DARE TIES**: This method was employed to ensure that each component model contributes effectively to the merged model, maintaining a high level of performance across diverse applications. NousResearch/Meta-Llama-3-8B served as the base model for this integration, providing a stable and powerful framework for the other models to build upon. ## 💻 Ollama ``` ollama create smartllama-3-Ko-8b-256k-pose -f ./Modelfile_Q5_K_M ``` [Modelfile_Q5_K_M] ``` FROM smartllama-3-ko-8b-256k-pose-Q5_K_M.gguf TEMPLATE """ {{- if .System }} system <s>{{ .System }}</s> {{- end }} user <s>Human: {{ .Prompt }}</s> assistant <s>Assistant: """ SYSTEM """ 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 길이에 상관없이 모든 대답은 한국어(Korean)으로 대답해줘. """ PARAMETER temperature 0.7 PARAMETER num_predict 3000 PARAMETER num_ctx 256000 PARAMETER stop "<s>" PARAMETER stop "</s>" ``` ## 💻 Ollama Python Summarizing Normal Test Code install all of these libraries ``` pip install requests beautifulsoup4 PyPDF2 langchain-community langchain ``` pose_test.py ``` import sys import os import requests from bs4 import BeautifulSoup import PyPDF2 from langchain_community.chat_models import ChatOllama from langchain.schema import AIMessage, HumanMessage, SystemMessage def clean_output(text): text = text.replace("</s>", "").strip() return text def invoke_model(text): messages = [ SystemMessage(content='You are an expert copywriter with expertise in summarizing documents.'), HumanMessage(content=f'Please provide a short and concise summary of the following text:\nTEXT: {text}') ] try: llm = ChatOllama(model="pose:latest") summary_output = llm.invoke(messages) if isinstance(summary_output, AIMessage): cleaned_content = clean_output(summary_output.content) return cleaned_content else: return "Unexpected data type for model output." 
except Exception as e: print(f"An error occurred while processing the model output: {str(e)}") return None def fetch_text_from_url(url): try: response = requests.get(url) response.raise_for_status() soup = BeautifulSoup(response.text, 'html.parser') content = soup.find('div', {'id': 'bodyContent'}) paragraphs = content.find_all('p') text_content = ' '.join(p.text for p in paragraphs) return text_content except requests.RequestException as e: print(f"Failed to fetch data from URL: {str(e)}") return None def read_text_file(file_path): with open(file_path, "r", encoding="utf-8") as file: return file.read() def read_pdf(file_path): with open(file_path, "rb") as file: reader = PyPDF2.PdfReader(file) text_content = "" for page in reader.pages: extracted_text = page.extract_text() if extracted_text: text_content += extracted_text + "\n" return text_content def summarize_content(source): if source.startswith(('http://', 'https://')): text_content = fetch_text_from_url(source) else: _, file_extension = os.path.splitext(source) if file_extension.lower() == '.pdf': text_content = read_pdf(source) elif file_extension.lower() in ['.txt', '.text']: text_content = read_text_file(source) else: print("Unsupported file type") return if text_content: summary = invoke_model(text_content) print("Summary of the document:") print(summary) else: print("No text found or unable to extract text from source.") if __name__ == '__main__': if len(sys.argv) < 2: print("Usage: python script.py <file_path_or_url>") else: source = sys.argv[1] summarize_content(source) ``` run txt file (assume txt is a.txt) ``` python pose_test.py a.txt ``` run url (assume txt is url) ``` python pose_test.py url ``` You can find both test results below on the section : Test Result1 ## 💻 Ollama Python Summarizing Test Code for the target lang response install all of these libraries ``` pip install requests beautifulsoup4 PyPDF2 googletrans==4.0.0-rc1 langchain-community langchain aiohttp asyncio aiofiles ``` pose_lang.py ``` import sys import os import aiohttp import PyPDF2 from bs4 import BeautifulSoup from langchain_community.chat_models import ChatOllama from langchain.schema import AIMessage, HumanMessage, SystemMessage from googletrans import Translator import logging import asyncio import aiofiles # Setup logging logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s') def clean_output(text): """Cleans the model output text.""" text = text.replace("</s>", "").strip() # Specific cleaning operation return text def translate_text(text, src_lang, dest_lang): """Translates text from source language to destination language using Google Translate.""" if src_lang == dest_lang: return text translator = Translator() try: translation = translator.translate(text, src=src_lang, dest=dest_lang) return translation.text except Exception as e: logging.error(f"Translation failed: {e}") return text def detect_language(text): """Detects the language of the given text.""" translator = Translator() try: detected = translator.detect(text) return detected.lang except Exception as e: logging.error(f"Language detection failed: {e}") return None async def invoke_model(text, target_lang): """Asynchronously invokes the chat model and processes the response with language-specific instructions.""" llm = ChatOllama(model="pose:latest") try: # Define messages based on target language if target_lang == 'ko': messages = [ SystemMessage(content='문서의 핵심 요약을 상세하게 제공해 주실 전문가로서, 다음 문서를 요약해 주세요.'), HumanMessage(content=f'다음 텍스트에 대한 전문적 요약을 
제공해 주세요. 요약은 한국어의 언어적 뉘앙스에 맞게 최고 수준의 명확성과 세부 사항을 준수해야 합니다:\n\nTEXT: {text}') ] else: # default to English if not Korean messages = [ SystemMessage(content='As an adept summarizer, your expertise is required to condense the following document into its essential points in detail.'), HumanMessage(content=f'Kindly provide an expert summary of the text below, adhering to the highest standards of clarity and detail. Ensure the response is tailored to the linguistic nuances of English:\n\nTEXT: {text}') ] # Since invoke is not awaitable, run it in a thread if it's blocking response = await asyncio.to_thread(llm.invoke, messages) if isinstance(response, AIMessage): cleaned_content = clean_output(response.content) content_lang = detect_language(cleaned_content) print(f"Current content language: {content_lang}, Target language to be translated to: {target_lang}") if content_lang != target_lang: return translate_text(cleaned_content, content_lang, target_lang) return cleaned_content else: raise ValueError("Model did not return an AIMessage") except Exception as e: logging.error(f"Error during model invocation: {e}") return "Model invocation failed." async def fetch_text_from_url(url): """Asynchronously fetches and extracts text content from a given URL.""" async with aiohttp.ClientSession() as session: try: async with session.get(url) as response: content = await response.text() soup = BeautifulSoup(content, 'html.parser') main_content = soup.select_one('#mw-content-text, #bodyContent, .content') if not main_content: logging.error("No content found in the expected sections.") return None text_content = ' '.join(p.get_text() for p in main_content.find_all(['p', 'li'], string=True)) return text_content except Exception as e: logging.error(f"Error fetching URL content: {e}") return None async def read_text_file(file_path): """Asynchronously reads text from a text file.""" async with aiofiles.open(file_path, mode='r', encoding='utf-8') as file: text_content = await file.read() return text_content async def read_pdf(file_path): """Asynchronously reads text from a PDF file.""" def sync_read_pdf(path): try: with open(path, "rb") as file: reader = PyPDF2.PdfReader(file) return ' '.join(page.extract_text() for page in reader.pages if page.extract_text()) except Exception as e: logging.error(f"Error reading PDF file: {e}") return None return await asyncio.to_thread(sync_read_pdf, file_path) async def summarize_content(source, language): """Processes input source (URL, file, text) and outputs a summary in the specified language asynchronously.""" print("Processing input...") text_content = None if source.startswith(('http://', 'https://')): print("Fetching content from URL...") text_content = await fetch_text_from_url(source) elif os.path.isfile(source): _, file_extension = os.path.splitext(source) if file_extension.lower() == '.pdf': print("Reading PDF...") text_content = await read_pdf(source) elif file_extension.lower() in ['.txt', '.text']: print("Reading text file...") text_content = await read_text_file(source) else: print("Unsupported file type") return else: print("Unsupported file type") return if text_content: print("Summarizing content...") summary = await invoke_model(text_content, language) print("\n--- Summary of the document ---\n") print(summary) else: print("No text found or unable to extract text from source.") if __name__ == '__main__': if len(sys.argv) < 3: print("Usage: python script.py <file_path_or_url_or_text> <language>") print("Language should be 'ko' for Korean or 'en' for English.") 
else: source = sys.argv[1] language = sys.argv[2] asyncio.run(summarize_content(source, language)) ``` run txt file (assume txt is a.txt) ``` Korean response : python pose_lang a.txt ko English response : python pose_lang a.txt en ``` run pdf file (assume pdf is a.pdf) ``` Korean response : python pose_lang a.pdf ko English response : python pose_lang a.pdf en ``` run url (assume url is wikepedia) ``` Korean response : python pose_lang url ko English response : python pose_lang url en ``` I added additional Google Translator here. If you request an answer in Korean and the answer is in English sometimes for the lang hallucination, this function detects it and answers you in Korean. Conversely, if you request a response in English and the response is in Korean for the lang hallucination, this function detects it and responds in English. You can find both test results below on the section : Test Result2 for target lang response ### 🗞️ Configuration The YAML configuration for this model: ```yaml models: - model: NousResearch/Meta-Llama-3-8B # Base model providing a general foundation without specific parameters - model: NousResearch/Meta-Llama-3-8B-Instruct parameters: density: 0.60 weight: 0.25 - model: winglian/llama-3-8b-256k-PoSE parameters: density: 0.60 weight: 0.20 - model: Locutusque/Llama-3-Orca-1.0-8B parameters: density: 0.55 weight: 0.15 - model: abacusai/Llama-3-Smaug-8B parameters: density: 0.55 weight: 0.15 - model: beomi/Llama-3-Open-Ko-8B-Instruct-preview parameters: density: 0.55 weight: 0.30 merge_method: dare_ties base_model: NousResearch/Meta-Llama-3-8B parameters: int8_mask: true dtype: bfloat16 ``` Test OS Condition ``` Hardware Overview: Model Name: MacBook Pro Model Identifier: MacBookPro18,2 Chip: Apple M1 Max Total Number of Cores: 10 (8 performance and 2 efficiency) Memory: 64 GB System Firmware Version: 10151.101.3 OS Loader Version: 10151.101.3 ``` ### 🎊 Test Result1 (Normal) **SmartLlama-3-Ko-8B-256k-PoSE Summary Ability** consideration Long sentences seemed to summarize well, but I observed that answers came in English. And when I asked for it to be translated into Korean, I confirmed that it was translated well. The summary seems to work well, but you can take into account the fact that there are times when it cannot be summarized directly in Korean. ## Summary of Britney Spears on Wikipedia [![Britney Spears Singer Wikipedia Summary](https://i.ibb.co/2600HbV/Screenshot-2024-05-02-at-11-52-58-AM.png)](https://ibb.co/7zxxL9M) ## Summary of Steve Jobs Text File [![Steve Jobs Text File Summary](https://i.ibb.co/10tRCrj/Screenshot-2024-05-02-at-11-54-50-AM.png)](https://ibb.co/9pkyxbS) ## Summary of Jay Park on Wikipedia [![Jay Park Wikipedia Summary](https://i.ibb.co/nmkpbCt/Screenshot-2024-05-02-at-1-33-30-PM.png)](https://ibb.co/g9gY3Vh) ### 🎊 Test Result2 (Target Language Summary Return) **SmartLlama-3-Ko-8B-256k-PoSE Summary Ability** consideration I added additional Google Translator here. If you request an answer in Korean and the answer is in English, this function detects it and answers you in Korean. Conversely, if you request a response in English and the response is in Korean, this function detects it and responds in English. If you don't get a clear answer, try running it several times. 
## Summary of economy pdf ``` python final2.py economy.pdf ko # if you want english summary, en ``` [![Economy pdf Summary](https://i.ibb.co/QftXyWQ/Screenshot-2024-05-02-at-9-05-51-PM.png)](https://ibb.co/JKgCDYt) ## Summary of Steve Jobs Text File ``` python final2.py steve.txt ko # if you want english summary, en ``` [![Steve Jobs Text File Summary](https://i.ibb.co/1nmqWxk/Screenshot-2024-05-02-at-8-57-20-PM.png)](https://ibb.co/PY6hH8d) ## Summary of Jay Park on Wikipedia ``` python final2.py https://en.wikipedia.org/wiki/Jay_Park ko # if you want english summary, en ``` [![Jay Park Wikipedia Summary](https://i.ibb.co/Ssk4tdY/Screenshot-2024-05-02-at-8-54-19-PM.png)](https://ibb.co/j6CPyW0) **Test Source From** [박재범 - wikipedia - EN](https://en.wikipedia.org/wiki/Jay_Park) [박재범 - wikipedia - KR](https://ko.wikipedia.org/wiki/%EB%B0%95%EC%9E%AC%EB%B2%94) [Britney Spears - wikipedia - EN](https://en.wikipedia.org/wiki/Britney_Spears) [한국은행 경제전망 보고서 - KR](https://www.bok.or.kr/viewer/skin/doc.html?fn=202402290251197820.pdf&rs=/webview/result/P0002359/202402) [Community member : Mr Han' steve jobs txt file] ### ⛑️ Test Issue 2024-05-02 ``` If you use load_summarize_chain(), there will be repetition. -> community member Mr.Han issue Is it a merge issue? He thinks the merge target may be the issue. chain = load_summarize_chain( llm, chain_type='stuff', prompt=prompt, verbose=False ) output_summary = chain.invoke(docs) -> investigating for me how to solve..... ``` ``` Mr.Han is investgating the symptoms Your OS is using REDHAT. Even if I run the code using the LLAMA3 model provided by ollama, there is an error. I wonder if I should wait a little longer for Red Hat... <|eot_id|><|start_header_id|>assistant<|end_header_id|>, ... omitted Ha ha, thanks for the chat! You too have a great day and happy summarizing if you need it again soon!<|eot_id|><|start_header_id|>assistant<|end_header_id|> It's not a merge problem... I think it's a fundamental problem that doesn't fit the OS environment... so I'm sharing it with you. Is there anyone who has the same problem as me in redhat? ```
babylm/babyllama-100m-2024
babylm
2024-05-13T01:09:04Z
412
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-13T00:57:06Z
Entry not found
PrunaAI/abacusai-Llama-3-Smaug-8B-GGUF-smashed
PrunaAI
2024-05-18T17:10:43Z
412
1
null
[ "gguf", "pruna-ai", "region:us" ]
null
2024-05-18T16:17:32Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.com/invite/vb6SmA3hxu) ## This repo contains GGUF versions of the abacusai/Llama-3-Smaug-8B model. # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.com/invite/vb6SmA3hxu) to share feedback/suggestions or get help. **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GGUF. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***What is the model format?*** We use GGUF format. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). # Downloading and running the models You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/): | Quant type | Description | |------------|--------------------------------------------------------------------------------------------| | Q5_K_M | High quality, recommended. | | Q5_K_S | High quality, recommended. | | Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. | | Q4_K_S | Slightly lower quality with more space savings, recommended. | | IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. | | IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. | | Q3_K_L | Lower quality but usable, good for low RAM availability. | | Q3_K_M | Even lower quality. | | IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. 
| | IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. | | Q3_K_S | Low quality, not recommended. | | IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. | | Q2_K | Very low quality but surprisingly usable. | ## How to download GGUF files? **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * Faraday.dev - **Option A** - Downloading in `text-generation-webui`: - **Step 1**: Under Download Model, you can enter the model repo: abacusai-Llama-3-Smaug-8B-GGUF-smashed and below it, a specific filename to download, such as: Llama-3-Smaug-8B.IQ3_M.gguf. - **Step 2**: Then click Download. - **Option B** - Downloading on the command line (including multiple files at once): - **Step 1**: We recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` - **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download abacusai-Llama-3-Smaug-8B-GGUF-smashed Llama-3-Smaug-8B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage (click to read)</summary> Alternatively, you can also download multiple files at once with a pattern: ```shell huggingface-cli download abacusai-Llama-3-Smaug-8B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download abacusai-Llama-3-Smaug-8B-GGUF-smashed Llama-3-Smaug-8B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## How to run model in GGUF format? - **Option A** - Introductory example with `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 35 -m Llama-3-Smaug-8B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]" ``` Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`. For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) - **Option B** - Running in `text-generation-webui` Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp). - **Option C** - Running from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/). #### First install the package Run one of the following commands, according to your system: ```shell # Base llama-cpp-python with no GPU acceleration pip install llama-cpp-python # With NVidia CUDA acceleration CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python # Or with OpenBLAS acceleration CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python # Or with CLBLast acceleration CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python # Or with AMD ROCm GPU acceleration (Linux only) CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python # Or with Metal GPU acceleration for macOS systems only CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python # On Windows, to set the variables CMAKE_ARGS in PowerShell, follow this format; eg for NVidia CUDA: $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on" pip install llama-cpp-python ``` #### Simple llama-cpp-python example code ```python from llama_cpp import Llama # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = Llama( model_path="./Llama-3-Smaug-8B.IQ3_M.gguf", # Download the model file first n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available ) # Simple inference example output = llm( "<s>[INST] {prompt} [/INST]", # Prompt max_tokens=512, # Generate up to 512 tokens stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using. echo=True # Whether to echo the prompt ) # Chat Completion API llm = Llama(model_path="./Llama-3-Smaug-8B.IQ3_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using llm.create_chat_completion( messages = [ {"role": "system", "content": "You are a story writing assistant."}, { "role": "user", "content": "Write a story about llamas." 
} ] ) ``` - **Option D** - Running with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) A minimal LangChain + llama-cpp-python sketch is also included at the end of this card. ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model that provided the base before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
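As a complement to the LangChain guides linked under Option D above, here is a minimal sketch of wiring a downloaded GGUF file into LangChain through the `llama-cpp-python` bindings. The file name, context size and sampling settings are illustrative assumptions based on the examples earlier in this card, not an official recipe, and `langchain-community` must be installed separately.

```python
# Minimal sketch: LangChain + llama-cpp-python with a GGUF file from this repo.
# Assumes the quant file has already been downloaded as shown in Option B above.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Llama-3-Smaug-8B.IQ3_M.gguf",  # path to the downloaded quant (assumption)
    n_ctx=4096,        # context window; raise it if you have the RAM
    n_gpu_layers=35,   # set to 0 if you have no GPU acceleration
    temperature=0.7,
    max_tokens=256,
)

print(llm.invoke("Write a one-sentence story about llamas."))
```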
MLP-SEMO/semo_csr
MLP-SEMO
2024-05-31T08:19:06Z
412
0
transformers
[ "transformers", "safetensors", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-31T08:07:59Z
Entry not found
Shinroi/ppo-Huggy
Shinroi
2024-06-11T09:23:32Z
412
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-06-05T08:08:38Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: Shinroi/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF
UnclearPWR
2024-06-06T02:35:46Z
412
0
transformers
[ "transformers", "gguf", "climate", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:NuclearnAI/SPARK-mini-instruct", "endpoints_compatible", "region:us" ]
text-generation
2024-06-06T02:35:36Z
--- language: - en library_name: transformers tags: - climate - llama-cpp - gguf-my-repo base_model: NuclearnAI/SPARK-mini-instruct pipeline_tag: text-generation --- # UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF This model was converted to GGUF format from [`NuclearnAI/SPARK-mini-instruct`](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/NuclearnAI/SPARK-mini-instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF --hf-file spark-mini-instruct-q8_0.gguf -c 2048 ```
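Beyond the llama.cpp CLI and server shown above, the same GGUF file can be loaded from Python. The snippet below is a minimal sketch using the `llama-cpp-python` bindings together with `huggingface_hub`; the file name matches the quant in this repo, while the context size and token budget are illustrative assumptions.

```python
# Minimal sketch: load the Q8_0 GGUF from this repo with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub` has been run.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="UnclearPWR/SPARK-mini-instruct-Q8_0-GGUF",
    filename="spark-mini-instruct-q8_0.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)  # 2048 matches the -c value used above
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```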
llava-hf/LLaVA-NeXT-Video-7B-DPO-hf
llava-hf
2024-06-17T07:54:06Z
412
2
transformers
[ "transformers", "safetensors", "llava_next", "pretraining", "image-text-to-text", "en", "arxiv:2405.21075", "license:llama2", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-06-06T11:24:22Z
--- language: - en license: llama2 pipeline_tag: image-text-to-text --- # LLaVA-NeXT-Video Model Card Also check out the Google Colab demo to run Llava on a free-tier Google Colab instance: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CZggLHrjxMReG-FNOmqSOdi4z7NPq6SO?usp=sharing) Disclaimer: The team releasing LLaVa-NeXT-Video did not write a model card for this model, so this model card has been written by the Hugging Face team. ## 📄 Model details **Model type:** LLaVA-Next-Video is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data. The model is built on top of LLaVa-NeXT by tuning on a mix of video and image data to achieve better video understanding capabilities. The videos were sampled uniformly to be 32 frames per clip. The model is a current SOTA among open-source models on [VideoMME bench](https://arxiv.org/abs/2405.21075). Base LLM: [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) ![llava_next_video_arch](demo.png) **Model date:** LLaVA-Next-Video-7B was trained in April 2024. **Paper or resources for more information:** https://github.com/LLaVA-VL/LLaVA-NeXT ## 📚 Training dataset ### Image - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 500K academic-task-oriented VQA data mixture. - 50K GPT-4V data mixture. - 40K ShareGPT data. ### Video - 100K VideoChatGPT-Instruct. ## 📊 Evaluation dataset A collection of 4 benchmarks, including 3 academic VQA benchmarks and 1 captioning benchmark. ## 🚀 How to use the model First, make sure to have `transformers >= 4.42.0`. The model supports multi-visual and multi-prompt generation, meaning that you can pass multiple images/videos in your prompt. Make sure also to follow the correct prompt template (`USER: xxx\nASSISTANT:`) and add the token `<image>` or `<video>` to the location where you want to query images/videos: Below is an example script to run generation in `float16` precision on a GPU device: ```python import av import numpy as np import torch from huggingface_hub import hf_hub_download from transformers import LlavaNextVideoProcessor, LlavaNextVideoForConditionalGeneration model_id = "llava-hf/LLaVA-NeXT-Video-7B-DPO-hf" model = LlavaNextVideoForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, ).to(0) processor = LlavaNextVideoProcessor.from_pretrained(model_id) def read_video_pyav(container, indices): ''' Decode the video with PyAV decoder. Args: container (`av.container.input.InputContainer`): PyAV container. indices (`List[int]`): List of frame indices to decode. Returns: result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). 
''' frames = [] container.seek(0) start_index = indices[0] end_index = indices[-1] for i, frame in enumerate(container.decode(video=0)): if i > end_index: break if i >= start_index and i in indices: frames.append(frame) return np.stack([x.to_ndarray(format="rgb24") for x in frames]) # define a chat history and use `apply_chat_template` to get correctly formatted prompt # Each value in "content" has to be a list of dicts with types ("text", "image", "video") conversation = [ { "role": "user", "content": [ {"type": "text", "text": "Why is this video funny?"}, {"type": "video"}, ], }, ] prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) video_path = hf_hub_download(repo_id="raushan-testing-hf/videos-test", filename="sample_demo_1.mp4", repo_type="dataset") container = av.open(video_path) # sample uniformly 8 frames from the video, can sample more for longer videos total_frames = container.streams.video[0].frames indices = np.arange(0, total_frames, total_frames / 8).astype(int) clip = read_video_pyav(container, indices) inputs_video = processor(text=prompt, videos=clip, padding=True, return_tensors="pt").to(model.device) output = model.generate(**inputs_video, max_new_tokens=100, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)) ``` ### Inference with images as inputs To generate from images use the below code after loading the model as shown above: ```python import requests from PIL import Image image_file = "http://images.cocodataset.org/val2017/000000039769.jpg" raw_image = Image.open(requests.get(image_file, stream=True).raw) conversation = [ { "role": "user", "content": [ {"type": "text", "text": "What are these?"}, {"type": "image"}, ], } ] prompt = processor.apply_chat_template(conversation, add_generation_prompt=True) inputs_image = processor(prompt, images=raw_image, return_tensors='pt').to(0, torch.float16) output = model.generate(**inputs_image, max_new_tokens=100, do_sample=False) print(processor.decode(output[0][2:], skip_special_tokens=True)) ``` ### Inference with images and videos as inputs To generate from images and videos in a single call, use the below code after loading the model as shown above: ```python conversation_1 = [ { "role": "user", "content": [ {"type": "text", "text": "What's the content of the image?"}, {"type": "image"}, ], } ] conversation_2 = [ { "role": "user", "content": [ {"type": "text", "text": "Why is this video funny?"}, {"type": "video"}, ], }, ] prompt_1 = processor.apply_chat_template(conversation_1, add_generation_prompt=True) prompt_2 = processor.apply_chat_template(conversation_2, add_generation_prompt=True) inputs = processor(text=[prompt_1, prompt_2], images=raw_image, videos=clip, padding=True, return_tensors="pt").to(model.device) # Generate generate_ids = model.generate(**inputs, max_new_tokens=100) out = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False) print(out) ``` ### Model optimization #### 4-bit quantization through `bitsandbytes` library First make sure to install `bitsandbytes` (`pip install bitsandbytes`) and make sure to have access to a CUDA compatible GPU device. Simply change the snippet above with: ```diff model = LlavaNextVideoForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + load_in_4bit=True ) ``` #### Use Flash-Attention 2 to further speed-up generation First make sure to install `flash-attn`. Refer to the [original repository of Flash Attention](https://github.com/Dao-AILab/flash-attention) regarding that package installation. 
Simply change the snippet above with: ```diff model = LlavaNextVideoForConditionalGeneration.from_pretrained( model_id, torch_dtype=torch.float16, low_cpu_mem_usage=True, + use_flash_attention_2=True ).to(0) ``` ## 🔒 License Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved. ## ✏️ Citation If you find our paper and code useful in your research: ```BibTeX @misc{zhang2024llavanextvideo, title={LLaVA-NeXT: A Strong Zero-shot Video Understanding Model}, url={https://llava-vl.github.io/blog/2024-04-30-llava-next-video/}, author={Zhang, Yuanhan and Li, Bo and Liu, haotian and Lee, Yong jae and Gui, Liangke and Fu, Di and Feng, Jiashi and Liu, Ziwei and Li, Chunyuan}, month={April}, year={2024} } ``` ```BibTeX @misc{liu2024llavanext, title={LLaVA-NeXT: Improved reasoning, OCR, and world knowledge}, url={https://llava-vl.github.io/blog/2024-01-30-llava-next/}, author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Li, Bo and Zhang, Yuanhan and Shen, Sheng and Lee, Yong Jae}, month={January}, year={2024} } ```
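For readers who want both optimizations from the Model optimization section at once, here is a hedged sketch combining 4-bit loading and Flash-Attention 2. The flags are assumptions based on the two diffs above; recent `transformers` releases also accept `attn_implementation="flash_attention_2"`, so verify against your installed version.

```python
# Sketch: combine the 4-bit and Flash-Attention 2 diffs shown above.
# With load_in_4bit=True, bitsandbytes places the weights on the GPU itself,
# so the explicit .to(0) call is dropped.
import torch
from transformers import LlavaNextVideoProcessor, LlavaNextVideoForConditionalGeneration

model_id = "llava-hf/LLaVA-NeXT-Video-7B-DPO-hf"

model = LlavaNextVideoForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    load_in_4bit=True,            # requires bitsandbytes and a CUDA GPU
    use_flash_attention_2=True,   # requires flash-attn; newer versions prefer attn_implementation="flash_attention_2"
)
processor = LlavaNextVideoProcessor.from_pretrained(model_id)
```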
mradermacher/BagelMIsteryTour-8x7B-GGUF
mradermacher
2024-06-12T20:15:51Z
412
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:ycros/BagelMIsteryTour-8x7B", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-06-11T03:42:15Z
--- base_model: ycros/BagelMIsteryTour-8x7B language: - en library_name: transformers license: cc-by-nc-4.0 quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/ycros/BagelMIsteryTour-8x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.IQ3_XS.gguf) | IQ3_XS | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.IQ3_S.gguf) | IQ3_S | 20.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.IQ3_M.gguf) | IQ3_M | 21.5 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/BagelMIsteryTour-8x7B-GGUF/resolve/main/BagelMIsteryTour-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
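If you prefer to fetch a single quant programmatically rather than through a browser, the sketch below uses `huggingface_hub` to pull one of the files listed in the table above; Q4_K_S is chosen only as an example, and any other single-file quant works the same way.

```python
# Sketch: download one quant from the table above with huggingface_hub.
# Q4_K_S is picked here only because the table marks it "fast, recommended".
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/BagelMIsteryTour-8x7B-GGUF",
    filename="BagelMIsteryTour-8x7B.Q4_K_S.gguf",
)
print(path)  # local path of the ~26.8 GB file, ready to pass to a GGUF loader
```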
SixOpen/Meta-Llama-3-8B-IQ3_M-GGUF
SixOpen
2024-06-11T12:38:43Z
412
0
null
[ "gguf", "facebook", "meta", "pytorch", "llama", "llama-3", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "region:us" ]
text-generation
2024-06-11T12:38:24Z
--- language: - en license: llama3 tags: - facebook - meta - pytorch - llama - llama-3 - llama-cpp - gguf-my-repo base_model: meta-llama/Meta-Llama-3-8B pipeline_tag: text-generation extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\ \ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\ \ use, reproduction, distribution and modification of the Llama Materials set forth\ \ herein.\n\"Documentation\" means the specifications, manuals and documentation\ \ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\ \"Licensee\" or \"you\" means you, or your employer or any other person or entity\ \ (if you are entering into this Agreement on such person or entity’s behalf), of\ \ the age required under applicable laws, rules or regulations to provide legal\ \ consent and that has legal authority to bind your employer or such other person\ \ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\ \ 3\" means the foundational large language models and software and algorithms,\ \ including machine-learning model code, trained model weights, inference-enabling\ \ code, training-enabling code, fine-tuning enabling code and other elements of\ \ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\ \"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\ \ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\ we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\ \ an entity, your principal place of business is in the EEA or Switzerland) and\ \ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\ \ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\ \ a non-exclusive, worldwide, non-transferable and royalty-free limited license\ \ under Meta’s intellectual property or other rights owned by Meta embodied in the\ \ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\ \ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\ \ If you distribute or make available the Llama Materials (or any derivative works\ \ thereof), or a product or service that uses any of them, including another AI\ \ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\ \ and (B) prominently display “Built with Meta Llama 3” on a related website, user\ \ interface, blogpost, about page, or product documentation. If you use the Llama\ \ Materials to create, train, fine tune, or otherwise improve an AI model, which\ \ is distributed or made available, you shall also include “Llama 3” at the beginning\ \ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\ \ works thereof, from a Licensee as part of an integrated end user product, then\ \ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\ \ copies of the Llama Materials that you distribute the following attribution notice\ \ within a “Notice” text file distributed as a part of such copies: “Meta Llama\ \ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\ \ Inc. All Rights Reserved.”\niv. 
Your use of the Llama Materials must comply with\ \ applicable laws and regulations (including trade compliance laws and regulations)\ \ and adhere to the Acceptable Use Policy for the Llama Materials (available at\ \ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\ \ into this Agreement.\nv. You will not use the Llama Materials or any output or\ \ results of the Llama Materials to improve any other large language model (excluding\ \ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\ \ on the Meta Llama 3 version release date, the monthly active users of the products\ \ or services made available by or for Licensee, or Licensee’s affiliates, is greater\ \ than 700 million monthly active users in the preceding calendar month, you must\ \ request a license from Meta, which Meta may grant to you in its sole discretion,\ \ and you are not authorized to exercise any of the rights under this Agreement\ \ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\ \ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\ \ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\ \ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\ \ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\ \ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\ \ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\ \ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\ \ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\ \ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\ \ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\ \ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\ 5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\ \ and in connection with the Llama Materials, neither Meta nor Licensee may use\ \ any name or mark owned by or associated with the other or any of its affiliates,\ \ except as required for reasonable and customary use in describing and redistributing\ \ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\ \ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\ \ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\ \ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\ \ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\ b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\ \ Meta, with respect to any derivative works and modifications of the Llama Materials\ \ that are made by you, as between you and Meta, you are and will be the owner of\ \ such derivative works and modifications.\nc. 
If you institute litigation or other\ \ proceedings against Meta or any entity (including a cross-claim or counterclaim\ \ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\ \ or any portion of any of the foregoing, constitutes infringement of intellectual\ \ property or other rights owned or licensable by you, then any licenses granted\ \ to you under this Agreement shall terminate as of the date such litigation or\ \ claim is filed or instituted. You will indemnify and hold harmless Meta from and\ \ against any claim by any third party arising out of or related to your use or\ \ distribution of the Llama Materials.\n6. Term and Termination. The term of this\ \ Agreement will commence upon your acceptance of this Agreement or access to the\ \ Llama Materials and will continue in full force and effect until terminated in\ \ accordance with the terms and conditions herein. Meta may terminate this Agreement\ \ if you are in breach of any term or condition of this Agreement. Upon termination\ \ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\ \ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\ \ and Jurisdiction. This Agreement will be governed and construed under the laws\ \ of the State of California without regard to choice of law principles, and the\ \ UN Convention on Contracts for the International Sale of Goods does not apply\ \ to this Agreement. The courts of California shall have exclusive jurisdiction\ \ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\ \ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\ \ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\ \ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\ #### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\ \ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\ \ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\ \ contribute to, encourage, plan, incite, or further illegal or unlawful activity\ \ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\ \ or harm to children, including the solicitation, creation, acquisition, or dissemination\ \ of child exploitative content or failure to report Child Sexual Abuse Material\n\ \ 3. Human trafficking, exploitation, and sexual violence\n 4. The\ \ illegal distribution of information or materials to minors, including obscene\ \ materials, or failure to employ legally required age-gating in connection with\ \ such information or materials.\n 5. Sexual solicitation\n 6. Any\ \ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\ \ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\ \ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\ \ or harmful conduct in the provision of employment, employment benefits, credit,\ \ housing, other economic benefits, or other essential goods and services\n 4.\ \ Engage in the unauthorized or unlicensed practice of any profession including,\ \ but not limited to, financial, legal, medical/health, or related professional\ \ practices\n 5. 
Collect, process, disclose, generate, or infer health, demographic,\ \ or other sensitive personal or private information about individuals without rights\ \ and consents required by applicable laws\n 6. Engage in or facilitate any action\ \ or generate any content that infringes, misappropriates, or otherwise violates\ \ any third-party rights, including the outputs or results of any products or services\ \ using the Llama Materials\n 7. Create, generate, or facilitate the creation\ \ of malicious code, malware, computer viruses or do anything else that could disable,\ \ overburden, interfere with or impair the proper working, integrity, operation\ \ or appearance of a website or computer system\n2. Engage in, promote, incite,\ \ facilitate, or assist in the planning or development of activities that present\ \ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\ \ to the following:\n 1. Military, warfare, nuclear industries or applications,\ \ espionage, use for materials or activities that are subject to the International\ \ Traffic Arms Regulations (ITAR) maintained by the United States Department of\ \ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\ \ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\ \ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\ \ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\ \ content intended to incite or promote violence, abuse, or any infliction of bodily\ \ harm to an individual\n3. Intentionally deceive or mislead others, including use\ \ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\ \ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\ \ or furthering defamatory content, including the creation of defamatory statements,\ \ images, or other content\n 3. Generating, promoting, or further distributing\ \ spam\n 4. Impersonating another individual without consent, authorization,\ \ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\ \ human-generated\n 6. Generating or facilitating false online engagement, including\ \ fake reviews and other means of fake online engagement\n4. Fail to appropriately\ \ disclose to end users any known dangers of your AI system\nPlease report any violation\ \ of this Policy, software “bug,” or other problems that could lead to a violation\ \ of this Policy through one of the following means:\n * Reporting issues with\ \ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\ \ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\ \ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\ \ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]" extra_gated_fields: First Name: text Last Name: text Date of birth: date_picker Country: country Affiliation: text geo: ip_location ? By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy : checkbox extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/). 
extra_gated_button_content: Submit --- # SixOpen/Meta-Llama-3-8B-IQ3_M-GGUF This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama --hf-repo SixOpen/Meta-Llama-3-8B-IQ3_M-GGUF --hf-file meta-llama-3-8b-iq3_m-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo SixOpen/Meta-Llama-3-8B-IQ3_M-GGUF --hf-file meta-llama-3-8b-iq3_m-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./main --hf-repo SixOpen/Meta-Llama-3-8B-IQ3_M-GGUF --hf-file meta-llama-3-8b-iq3_m-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./server --hf-repo SixOpen/Meta-Llama-3-8B-IQ3_M-GGUF --hf-file meta-llama-3-8b-iq3_m-imat.gguf -c 2048 ```
CHE-72/Baichuan2-7B-Chat-Q4_0-GGUF
CHE-72
2024-06-22T09:07:28Z
412
0
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "zh", "base_model:baichuan-inc/Baichuan2-7B-Chat", "region:us" ]
null
2024-06-22T09:07:09Z
--- base_model: baichuan-inc/Baichuan2-7B-Chat language: - en - zh license_name: baichuan2-community-license license_link: https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat/blob/main/Community%20License%20for%20Baichuan2%20Model.pdf tags: - llama-cpp - gguf-my-repo tasks: - text-generation --- # CHE-72/Baichuan2-7B-Chat-Q4_0-GGUF This model was converted to GGUF format from [`baichuan-inc/Baichuan2-7B-Chat`](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q4_0-GGUF --hf-file baichuan2-7b-chat-q4_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q4_0-GGUF --hf-file baichuan2-7b-chat-q4_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo CHE-72/Baichuan2-7B-Chat-Q4_0-GGUF --hf-file baichuan2-7b-chat-q4_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo CHE-72/Baichuan2-7B-Chat-Q4_0-GGUF --hf-file baichuan2-7b-chat-q4_0.gguf -c 2048 ```
NikolayKozloff/tabula-8b-IQ4_NL-GGUF
NikolayKozloff
2024-06-23T15:58:50Z
412
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:jpgard/t4-full", "base_model:mlfoundations/tabula-8b", "license:llama3", "region:us" ]
null
2024-06-23T15:58:28Z
--- base_model: mlfoundations/tabula-8b datasets: - jpgard/t4-full language: - en license: llama3 tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/tabula-8b-IQ4_NL-GGUF This model was converted to GGUF format from [`mlfoundations/tabula-8b`](https://huggingface.co/mlfoundations/tabula-8b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/mlfoundations/tabula-8b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/tabula-8b-IQ4_NL-GGUF --hf-file tabula-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/tabula-8b-IQ4_NL-GGUF --hf-file tabula-8b-iq4_nl-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/tabula-8b-IQ4_NL-GGUF --hf-file tabula-8b-iq4_nl-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/tabula-8b-IQ4_NL-GGUF --hf-file tabula-8b-iq4_nl-imat.gguf -c 2048 ```
NikolayKozloff/llama3-tweety-8b-italian-Q5_0-GGUF
NikolayKozloff
2024-06-25T01:39:33Z
412
1
null
[ "gguf", "llama-cpp", "gguf-my-repo", "base_model:RiTA-nlp/llama3-tweety-8b-italian", "region:us" ]
null
2024-06-25T01:39:09Z
--- base_model: RiTA-nlp/llama3-tweety-8b-italian tags: - llama-cpp - gguf-my-repo --- # NikolayKozloff/llama3-tweety-8b-italian-Q5_0-GGUF This model was converted to GGUF format from [`RiTA-nlp/llama3-tweety-8b-italian`](https://huggingface.co/RiTA-nlp/llama3-tweety-8b-italian) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/RiTA-nlp/llama3-tweety-8b-italian) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q5_0-GGUF --hf-file llama3-tweety-8b-italian-q5_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q5_0-GGUF --hf-file llama3-tweety-8b-italian-q5_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q5_0-GGUF --hf-file llama3-tweety-8b-italian-q5_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo NikolayKozloff/llama3-tweety-8b-italian-Q5_0-GGUF --hf-file llama3-tweety-8b-italian-q5_0.gguf -c 2048 ```
ml6team/mbart-large-cc25-cnn-dailymail-nl-finetune
ml6team
2022-05-16T11:41:05Z
411
11
transformers
[ "transformers", "pytorch", "mbart", "text2text-generation", "bart", "summarization", "nl", "dataset:ml6team/cnn_dailymail_nl", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: - nl tags: - mbart - bart - summarization datasets: - ml6team/cnn_dailymail_nl pipeline_tag: summarization widget: - text: 'Het jongetje werd eind april met zwaar letsel naar het ziekenhuis gebracht in Maastricht. Drie weken later overleed het kindje als gevolg van het letsel. Onderzoek moet nog uitwijzen wat voor verwondingen de baby precies had en hoe hij gewond is geraakt. Daarnaast doet de politie onderzoek in de woning van de ouders. Het is nog niet duidelijk wanneer de onderzoeken zijn afgerond, meldt 1Limburg. De verdachten zitten in beperkingen en mogen alleen contact hebben met hun advocaat.' - text: 'Volgens De Vries gaat het om "de hoogste beloning die ooit is uitgeloofd in Nederland". De stichting heeft een website waar donateurs geld kunnen storten, schrijft NH Nieuws. Volgens De Vries is dit initiatief ook bedoeld voor andere zaken waar beloningen voor een gouden tip worden uitgereikt. "Het is dus niet eenmalig", aldus De Vries. Het is de eerste keer dat zoiets wordt opgezet, stelt hij: De 18-jarige Tanja Groen verdween spoorloos tijdens de ontgroeningsweek van de Universiteit Maastricht in augustus 1993. Ze werd voor het laatst gezien nadat ze was vertrokken van een feestje. De studente zou vandaag 46 jaar zijn geworden. Ook de ouders van Groen waren op de persconferentie aanwezig. "Het is vandaag de verjaardag van Tanja Groen, die haar ouders al 27 jaar niet meer hebben kunnen vieren, omdat zij eind augustus 1993 spoorloos is verdwenen", zei De Vries. "Haar ouders zitten in tergende onzekerheid. Ze geloven dat ze niet meer leeft. Maar die ene promille vreet aan ze. Ze hebben recht op duidelijkheid. Ze komen op leeftijd. Grootste angst is nooit te weten wat er met hun kind is gebeurd." De Vries wil dat het miljoen binnen een jaar is ingezameld. Als het bedrag na een jaar lager uitkomt, dan is dat de uit te loven beloning. Is het meer, dan zal de rest van het geld gebruikt worden in beloningen in andere zaken. Het initiatief wordt gesteund door de politie en justitie. De afgelopen jaren is er vaker uitgebreid naar sporen van Tanja Groen gezocht, maar die zoekacties hebben niets concreets opgeleverd. Vorige week werd opnieuw naar de vrouw gezocht, op de Strabrechtse Heide in Noord-Brabant. Ook die zoektocht leverde niets op.' --- # mbart-large-cc25-cnn-dailymail-nl ## Model description Finetuned version of [mbart](https://huggingface.co/facebook/mbart-large-cc25). We also wrote a **blog post** about this model [here](https://blog.ml6.eu/why-we-open-sourced-two-dutch-summarization-datasets-1047445abc97) ## Intended uses & limitations It's meant for summarizing Dutch news articles. #### How to use ```python import transformers undisputed_best_model = transformers.MBartForConditionalGeneration.from_pretrained( "ml6team/mbart-large-cc25-cnn-dailymail-nl-finetune" ) tokenizer = transformers.MBartTokenizer.from_pretrained("facebook/mbart-large-cc25") summarization_pipeline = transformers.pipeline( task="summarization", model=undisputed_best_model, tokenizer=tokenizer, ) summarization_pipeline.model.config.decoder_start_token_id = tokenizer.lang_code_to_id[ "nl_XX" ] article = "Kan je dit even samenvatten alsjeblief." 
# Dutch summarization_pipeline( article, do_sample=True, top_p=0.75, top_k=50, # num_beams=4, min_length=50, early_stopping=True, truncation=True, )[0]["summary_text"] ``` ## Training data Finetuned [mbart](https://huggingface.co/facebook/mbart-large-cc25) with [this dataset](https://huggingface.co/datasets/ml6team/cnn_dailymail_nl) and another smaller dataset that we can't open source because we scraped it from the internet. For more information check out our blog post [here](https://blog.ml6.eu/).
projecte-aina/roberta-base-ca-v2-cased-sts
projecte-aina
2022-11-16T14:32:52Z
411
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "catalan", "semantic textual similarity", "sts-ca", "CaText", "Catalan Textual Corpus", "ca", "dataset:projecte-aina/sts-ca", "arxiv:1907.11692", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-06-30T07:55:48Z
--- pipeline_tag: text-classification language: - ca license: apache-2.0 tags: - "catalan" - "semantic textual similarity" - "sts-ca" - "CaText" - "Catalan Textual Corpus" datasets: - "projecte-aina/sts-ca" metrics: - "combined_score" model-index: - name: roberta-base-ca-v2-cased-sts results: - task: type: text-classification dataset: type: projecte-aina/sts-ca name: STS-ca metrics: - name: Combined score type: combined_score value: 0.7907 --- # Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for Semantic Textual Similarity. ## Table of Contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [Variable and metrics](#variable-and-metrics) - [Evaluation results](#evaluation-results) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-ca-v2-cased-sts** is a Semantic Textual Similarity (STS) model for the Catalan language fine-tuned from the [roberta-base-ca-v2](https://huggingface.co/projecte-aina/roberta-base-ca-v2) model, a [RoBERTa](https://arxiv.org/abs/1907.11692) base model pre-trained on a medium-size corpus collected from publicly available corpora and crawlers (check the roberta-base-ca-v2 model card for more details). ## Intended uses and limitations **roberta-base-ca-v2-cased-sts** model can be used to assess the similarity between two snippets of text. The model is limited by its training dataset and may not generalize well for all use cases. ## How to use To get the correct<sup>1</sup> model's prediction scores with values between 0.0 and 5.0, use the following code: ```python from transformers import pipeline, AutoTokenizer from scipy.special import logit model = 'projecte-aina/roberta-base-ca-v2-cased-sts' tokenizer = AutoTokenizer.from_pretrained(model) pipe = pipeline('text-classification', model=model, tokenizer=tokenizer) def prepare(sentence_pairs): sentence_pairs_prep = [] for s1, s2 in sentence_pairs: sentence_pairs_prep.append(f"{tokenizer.cls_token} {s1}{tokenizer.sep_token}{tokenizer.sep_token} {s2}{tokenizer.sep_token}") return sentence_pairs_prep sentence_pairs = [("El llibre va caure per la finestra.", "El llibre va sortir volant."), ("M'agrades.", "T'estimo."), ("M'agrada el sol i la calor", "A la Garrotxa plou molt.")] predictions = pipe(prepare(sentence_pairs), add_special_tokens=False) # convert back to scores to the original 0 and 5 interval for prediction in predictions: prediction['score'] = logit(prediction['score']) print(predictions) ``` Expected output: ``` [{'label': 'SIMILARITY', 'score': 2.118301674983813}, {'label': 'SIMILARITY', 'score': 2.1799755855125853}, {'label': 'SIMILARITY', 'score': 0.9511617858568939}] ``` <sup>1</sup> _**avoid using the widget** scores since they are normalized and do not reflect the original annotation values._ ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. 
However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Training data We used the STS dataset in Catalan called [STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca) for training and evaluation. ### Training procedure The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We then selected the best checkpoint using the downstream task metric in the corresponding development set, and then evaluated it on the test set. ## Evaluation ### Variable and metrics This model was finetuned maximizing the average score between the Pearson and Spearman correlations. ## Evaluation results We evaluated the _roberta-base-ca-v2-cased-sts_ on the STS-ca test set against standard multilingual and monolingual baselines: | Model | STS-ca (Combined score) | | ------------|:-------------| | roberta-base-ca-v2-cased-sts | 79.07 | | roberta-base-ca-cased-sts | **80.19** | | mBERT | 74.26 | | XLM-RoBERTa | 61.61 | For more details, check the fine-tuning and evaluation scripts in the official [GitHub repository](https://github.com/projecte-aina/club). ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to [email protected] ### Copyright Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Citation Information If you use any of these resources (datasets or models) in your work, please cite our latest paper: ```bibtex @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. 
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
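Since the evaluation section above defines the reported metric as the average of the Pearson and Spearman correlations, here is a small illustrative sketch of that computation with `scipy`; the toy numbers are made up and only demonstrate the formula, not the actual STS-ca evaluation.

```python
# Illustrative sketch of the "combined score" described above:
# the mean of the Pearson and Spearman correlations between predictions and gold scores.
from scipy.stats import pearsonr, spearmanr

gold = [0.0, 1.5, 2.0, 3.5, 5.0]          # hypothetical gold similarity scores
predictions = [0.4, 1.2, 2.5, 3.1, 4.6]   # hypothetical model predictions

pearson = pearsonr(gold, predictions)[0]
spearman = spearmanr(gold, predictions)[0]
combined_score = (pearson + spearman) / 2
print(f"combined score: {combined_score:.4f}")
```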
jeveuxaider/activity-classifier
jeveuxaider
2022-11-17T08:41:32Z
411
0
transformers
[ "transformers", "pytorch", "camembert", "text-classification", "autotrain", "fr", "dataset:jeveuxaider/activity-classifier", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-11-17T08:13:03Z
--- tags: - autotrain - text-classification language: - fr widget: - text: "Je participe à un accueil de jour" datasets: - jeveuxaider/activity-classifier co2_eq_emissions: emissions: 12.734050517307358 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 2096367492 - CO2 Emissions (in grams): 12.7341 ## Validation Metrics - Loss: 0.888 - Accuracy: 0.812 - Macro F1: 0.684 - Micro F1: 0.812 - Weighted F1: 0.808 - Macro Precision: 0.708 - Micro Precision: 0.812 - Weighted Precision: 0.813 - Macro Recall: 0.691 - Micro Recall: 0.812 - Weighted Recall: 0.812 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Je participe à un accueil de jour"}' https://api-inference.huggingface.co/models/jeveuxaider/activity-classifier ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("jeveuxaider/activity-classifier", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("jeveuxaider/activity-classifier", use_auth_token=True) inputs = tokenizer("Je participe à un accueil de jour", return_tensors="pt") outputs = model(**inputs) ```
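The Python API snippet above stops at the raw model outputs; as a follow-up, here is a hedged sketch of turning those logits into a predicted label, assuming the checkpoint exposes the usual `id2label` mapping in its config.

```python
# Sketch: convert the raw logits from `outputs` above into a label and probability.
# Assumes `model`, `tokenizer` and `outputs` are already defined as in the snippet above.
import torch

probs = torch.softmax(outputs.logits, dim=-1)[0]
pred_id = int(probs.argmax())
label = model.config.id2label[pred_id]   # assumes id2label is populated for this checkpoint
print(f"{label}: {probs[pred_id]:.3f}")
```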
timm/regnetx_064.pycls_in1k
timm
2024-02-10T23:32:52Z
411
0
timm
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "arxiv:2003.13678", "license:mit", "region:us" ]
image-classification
2023-03-21T06:32:36Z
--- license: mit library_name: timm tags: - image-classification - timm datasets: - imagenet-1k --- # Model card for regnetx_064.pycls_in1k A RegNetX-6.4GF image classification model. Pretrained on ImageNet-1k by paper authors. The `timm` RegNet implementation includes a number of enhancements not present in other implementations, including: * stochastic depth * gradient checkpointing * layer-wise LR decay * configurable output stride (dilation) * configurable activation and norm layers * option for a pre-activation bottleneck block used in RegNetV variant * only known RegNetZ model definitions with pretrained weights ## Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 26.2 - GMACs: 6.5 - Activations (M): 16.4 - Image size: 224 x 224 - **Papers:** - Designing Network Design Spaces: https://arxiv.org/abs/2003.13678 - **Dataset:** ImageNet-1k - **Original:** https://github.com/facebookresearch/pycls ## Model Usage ### Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model('regnetx_064.pycls_in1k', pretrained=True) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5) ``` ### Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnetx_064.pycls_in1k', pretrained=True, features_only=True, ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1 for o in output: # print shape of each feature map in output # e.g.: # torch.Size([1, 32, 112, 112]) # torch.Size([1, 168, 56, 56]) # torch.Size([1, 392, 28, 28]) # torch.Size([1, 784, 14, 14]) # torch.Size([1, 1624, 7, 7]) print(o.shape) ``` ### Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open(urlopen( 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png' )) model = timm.create_model( 'regnetx_064.pycls_in1k', pretrained=True, num_classes=0, # remove classifier nn.Linear ) model = model.eval() # get model specific transforms (normalization, resize) data_config = timm.data.resolve_model_data_config(model) transforms = timm.data.create_transform(**data_config, is_training=False) output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor # or equivalently (without needing to set num_classes=0) output = model.forward_features(transforms(img).unsqueeze(0)) # output is unpooled, a (1, 1624, 7, 7) shaped tensor output = model.forward_head(output, pre_logits=True) # output is a (1, num_features) shaped tensor ``` ## Model Comparison Explore the dataset and runtime metrics of this model in timm 
[model results](https://github.com/huggingface/pytorch-image-models/tree/main/results). For the comparison summary below, the ra_in1k, ra3_in1k, ch_in1k, sw_*, and lion_* tagged weights are trained in `timm`. |model |img_size|top1 |top5 |param_count|gmacs|macts | |-------------------------|--------|------|------|-----------|-----|------| |[regnety_1280.swag_ft_in1k](https://huggingface.co/timm/regnety_1280.swag_ft_in1k)|384 |88.228|98.684|644.81 |374.99|210.2 | |[regnety_320.swag_ft_in1k](https://huggingface.co/timm/regnety_320.swag_ft_in1k)|384 |86.84 |98.364|145.05 |95.0 |88.87 | |[regnety_160.swag_ft_in1k](https://huggingface.co/timm/regnety_160.swag_ft_in1k)|384 |86.024|98.05 |83.59 |46.87|67.67 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|288 |86.004|97.83 |83.59 |26.37|38.07 | |[regnety_1280.swag_lc_in1k](https://huggingface.co/timm/regnety_1280.swag_lc_in1k)|224 |85.996|97.848|644.81 |127.66|71.58 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|288 |85.982|97.844|83.59 |26.37|38.07 | |[regnety_160.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.sw_in12k_ft_in1k)|224 |85.574|97.666|83.59 |15.96|23.04 | |[regnety_160.lion_in12k_ft_in1k](https://huggingface.co/timm/regnety_160.lion_in12k_ft_in1k)|224 |85.564|97.674|83.59 |15.96|23.04 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|288 |85.398|97.584|51.82 |20.06|35.34 | |[regnety_2560.seer_ft_in1k](https://huggingface.co/timm/regnety_2560.seer_ft_in1k)|384 |85.15 |97.436|1282.6 |747.83|296.49| |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|320 |85.036|97.268|57.7 |15.46|63.94 | |[regnety_120.sw_in12k_ft_in1k](https://huggingface.co/timm/regnety_120.sw_in12k_ft_in1k)|224 |84.976|97.416|51.82 |12.14|21.38 | |[regnety_320.swag_lc_in1k](https://huggingface.co/timm/regnety_320.swag_lc_in1k)|224 |84.56 |97.446|145.05 |32.34|30.26 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|320 |84.496|97.004|28.94 |6.43 |37.94 | |[regnetz_e8.ra3_in1k](https://huggingface.co/timm/regnetz_e8.ra3_in1k)|256 |84.436|97.02 |57.7 |9.91 |40.94 | |[regnety_1280.seer_ft_in1k](https://huggingface.co/timm/regnety_1280.seer_ft_in1k)|384 |84.432|97.092|644.81 |374.99|210.2 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|320 |84.246|96.93 |27.12 |6.35 |37.78 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|320 |84.054|96.992|23.37 |6.19 |37.08 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|320 |84.038|96.992|23.46 |7.03 |38.92 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|320 |84.022|96.866|27.58 |9.33 |37.08 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|288 |83.932|96.888|39.18 |13.22|29.69 | |[regnety_640.seer_ft_in1k](https://huggingface.co/timm/regnety_640.seer_ft_in1k)|384 |83.912|96.924|281.38 |188.47|124.83| |[regnety_160.swag_lc_in1k](https://huggingface.co/timm/regnety_160.swag_lc_in1k)|224 |83.778|97.286|83.59 |15.96|23.04 | |[regnetz_040_h.ra3_in1k](https://huggingface.co/timm/regnetz_040_h.ra3_in1k)|256 |83.776|96.704|28.94 |4.12 |24.29 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|288 |83.72 |96.75 |30.58 |10.55|27.11 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|288 |83.718|96.724|30.58 |10.56|27.11 | 
|[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|288 |83.69 |96.778|83.59 |26.37|38.07 | |[regnetz_040.ra3_in1k](https://huggingface.co/timm/regnetz_040.ra3_in1k)|256 |83.62 |96.704|27.12 |4.06 |24.19 | |[regnetz_d8.ra3_in1k](https://huggingface.co/timm/regnetz_d8.ra3_in1k)|256 |83.438|96.776|23.37 |3.97 |23.74 | |[regnetz_d32.ra3_in1k](https://huggingface.co/timm/regnetz_d32.ra3_in1k)|256 |83.424|96.632|27.58 |5.98 |23.74 | |[regnetz_d8_evos.ch_in1k](https://huggingface.co/timm/regnetz_d8_evos.ch_in1k)|256 |83.36 |96.636|23.46 |4.5 |24.92 | |[regnety_320.seer_ft_in1k](https://huggingface.co/timm/regnety_320.seer_ft_in1k)|384 |83.35 |96.71 |145.05 |95.0 |88.87 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|288 |83.204|96.66 |20.64 |6.6 |20.3 | |[regnety_320.tv2_in1k](https://huggingface.co/timm/regnety_320.tv2_in1k)|224 |83.162|96.42 |145.05 |32.34|30.26 | |[regnety_080.ra3_in1k](https://huggingface.co/timm/regnety_080.ra3_in1k)|224 |83.16 |96.486|39.18 |8.0 |17.97 | |[regnetv_064.ra3_in1k](https://huggingface.co/timm/regnetv_064.ra3_in1k)|224 |83.108|96.458|30.58 |6.39 |16.41 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|288 |83.044|96.5 |20.65 |6.61 |20.3 | |[regnety_064.ra3_in1k](https://huggingface.co/timm/regnety_064.ra3_in1k)|224 |83.02 |96.292|30.58 |6.39 |16.41 | |[regnety_160.deit_in1k](https://huggingface.co/timm/regnety_160.deit_in1k)|224 |82.974|96.502|83.59 |15.96|23.04 | |[regnetx_320.tv2_in1k](https://huggingface.co/timm/regnetx_320.tv2_in1k)|224 |82.816|96.208|107.81 |31.81|36.3 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|288 |82.742|96.418|19.44 |5.29 |18.61 | |[regnety_160.tv2_in1k](https://huggingface.co/timm/regnety_160.tv2_in1k)|224 |82.634|96.22 |83.59 |15.96|23.04 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|320 |82.634|96.472|13.49 |3.86 |25.88 | |[regnety_080_tv.tv2_in1k](https://huggingface.co/timm/regnety_080_tv.tv2_in1k)|224 |82.592|96.246|39.38 |8.51 |19.73 | |[regnetx_160.tv2_in1k](https://huggingface.co/timm/regnetx_160.tv2_in1k)|224 |82.564|96.052|54.28 |15.99|25.52 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|320 |82.51 |96.358|13.46 |3.92 |25.88 | |[regnetv_040.ra3_in1k](https://huggingface.co/timm/regnetv_040.ra3_in1k)|224 |82.44 |96.198|20.64 |4.0 |12.29 | |[regnety_040.ra3_in1k](https://huggingface.co/timm/regnety_040.ra3_in1k)|224 |82.304|96.078|20.65 |4.0 |12.29 | |[regnetz_c16.ra3_in1k](https://huggingface.co/timm/regnetz_c16.ra3_in1k)|256 |82.16 |96.048|13.46 |2.51 |16.57 | |[regnetz_c16_evos.ch_in1k](https://huggingface.co/timm/regnetz_c16_evos.ch_in1k)|256 |81.936|96.15 |13.49 |2.48 |16.57 | |[regnety_032.ra_in1k](https://huggingface.co/timm/regnety_032.ra_in1k)|224 |81.924|95.988|19.44 |3.2 |11.26 | |[regnety_032.tv2_in1k](https://huggingface.co/timm/regnety_032.tv2_in1k)|224 |81.77 |95.842|19.44 |3.2 |11.26 | |[regnetx_080.tv2_in1k](https://huggingface.co/timm/regnetx_080.tv2_in1k)|224 |81.552|95.544|39.57 |8.02 |14.06 | |[regnetx_032.tv2_in1k](https://huggingface.co/timm/regnetx_032.tv2_in1k)|224 |80.924|95.27 |15.3 |3.2 |11.37 | |[regnety_320.pycls_in1k](https://huggingface.co/timm/regnety_320.pycls_in1k)|224 |80.804|95.246|145.05 |32.34|30.26 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|288 |80.712|95.47 |9.72 |2.39 |16.43 | |[regnety_016.tv2_in1k](https://huggingface.co/timm/regnety_016.tv2_in1k)|224 |80.66 |95.334|11.2 |1.63 
|8.04 | |[regnety_120.pycls_in1k](https://huggingface.co/timm/regnety_120.pycls_in1k)|224 |80.37 |95.12 |51.82 |12.14|21.38 | |[regnety_160.pycls_in1k](https://huggingface.co/timm/regnety_160.pycls_in1k)|224 |80.288|94.964|83.59 |15.96|23.04 | |[regnetx_320.pycls_in1k](https://huggingface.co/timm/regnetx_320.pycls_in1k)|224 |80.246|95.01 |107.81 |31.81|36.3 | |[regnety_080.pycls_in1k](https://huggingface.co/timm/regnety_080.pycls_in1k)|224 |79.882|94.834|39.18 |8.0 |17.97 | |[regnetz_b16.ra3_in1k](https://huggingface.co/timm/regnetz_b16.ra3_in1k)|224 |79.872|94.974|9.72 |1.45 |9.95 | |[regnetx_160.pycls_in1k](https://huggingface.co/timm/regnetx_160.pycls_in1k)|224 |79.862|94.828|54.28 |15.99|25.52 | |[regnety_064.pycls_in1k](https://huggingface.co/timm/regnety_064.pycls_in1k)|224 |79.716|94.772|30.58 |6.39 |16.41 | |[regnetx_120.pycls_in1k](https://huggingface.co/timm/regnetx_120.pycls_in1k)|224 |79.592|94.738|46.11 |12.13|21.37 | |[regnetx_016.tv2_in1k](https://huggingface.co/timm/regnetx_016.tv2_in1k)|224 |79.44 |94.772|9.19 |1.62 |7.93 | |[regnety_040.pycls_in1k](https://huggingface.co/timm/regnety_040.pycls_in1k)|224 |79.23 |94.654|20.65 |4.0 |12.29 | |[regnetx_080.pycls_in1k](https://huggingface.co/timm/regnetx_080.pycls_in1k)|224 |79.198|94.55 |39.57 |8.02 |14.06 | |[regnetx_064.pycls_in1k](https://huggingface.co/timm/regnetx_064.pycls_in1k)|224 |79.064|94.454|26.21 |6.49 |16.37 | |[regnety_032.pycls_in1k](https://huggingface.co/timm/regnety_032.pycls_in1k)|224 |78.884|94.412|19.44 |3.2 |11.26 | |[regnety_008_tv.tv2_in1k](https://huggingface.co/timm/regnety_008_tv.tv2_in1k)|224 |78.654|94.388|6.43 |0.84 |5.42 | |[regnetx_040.pycls_in1k](https://huggingface.co/timm/regnetx_040.pycls_in1k)|224 |78.482|94.24 |22.12 |3.99 |12.2 | |[regnetx_032.pycls_in1k](https://huggingface.co/timm/regnetx_032.pycls_in1k)|224 |78.178|94.08 |15.3 |3.2 |11.37 | |[regnety_016.pycls_in1k](https://huggingface.co/timm/regnety_016.pycls_in1k)|224 |77.862|93.73 |11.2 |1.63 |8.04 | |[regnetx_008.tv2_in1k](https://huggingface.co/timm/regnetx_008.tv2_in1k)|224 |77.302|93.672|7.26 |0.81 |5.15 | |[regnetx_016.pycls_in1k](https://huggingface.co/timm/regnetx_016.pycls_in1k)|224 |76.908|93.418|9.19 |1.62 |7.93 | |[regnety_008.pycls_in1k](https://huggingface.co/timm/regnety_008.pycls_in1k)|224 |76.296|93.05 |6.26 |0.81 |5.25 | |[regnety_004.tv2_in1k](https://huggingface.co/timm/regnety_004.tv2_in1k)|224 |75.592|92.712|4.34 |0.41 |3.89 | |[regnety_006.pycls_in1k](https://huggingface.co/timm/regnety_006.pycls_in1k)|224 |75.244|92.518|6.06 |0.61 |4.33 | |[regnetx_008.pycls_in1k](https://huggingface.co/timm/regnetx_008.pycls_in1k)|224 |75.042|92.342|7.26 |0.81 |5.15 | |[regnetx_004_tv.tv2_in1k](https://huggingface.co/timm/regnetx_004_tv.tv2_in1k)|224 |74.57 |92.184|5.5 |0.42 |3.17 | |[regnety_004.pycls_in1k](https://huggingface.co/timm/regnety_004.pycls_in1k)|224 |74.018|91.764|4.34 |0.41 |3.89 | |[regnetx_006.pycls_in1k](https://huggingface.co/timm/regnetx_006.pycls_in1k)|224 |73.862|91.67 |6.2 |0.61 |3.98 | |[regnetx_004.pycls_in1k](https://huggingface.co/timm/regnetx_004.pycls_in1k)|224 |72.38 |90.832|5.16 |0.4 |3.14 | |[regnety_002.pycls_in1k](https://huggingface.co/timm/regnety_002.pycls_in1k)|224 |70.282|89.534|3.16 |0.2 |2.17 | |[regnetx_002.pycls_in1k](https://huggingface.co/timm/regnetx_002.pycls_in1k)|224 |68.752|88.556|2.68 |0.2 |2.16 | ## Citation ```bibtex @InProceedings{Radosavovic2020, title = {Designing Network Design Spaces}, author = {Ilija Radosavovic and Raj Prateek Kosaraju and Ross Girshick and Kaiming 
He and Piotr Doll{\'a}r}, booktitle = {CVPR}, year = {2020} } ``` ```bibtex @misc{rw2019timm, author = {Ross Wightman}, title = {PyTorch Image Models}, year = {2019}, publisher = {GitHub}, journal = {GitHub repository}, doi = {10.5281/zenodo.4414861}, howpublished = {\url{https://github.com/huggingface/pytorch-image-models}} } ```
navervision/CompoDiff-Aesthetic
navervision
2023-04-27T11:40:22Z
411
2
transformers
[ "transformers", "pytorch", "CompoDiff", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-04-20T05:52:12Z
--- license: apache-2.0 --- https://github.com/navervision/CompoDiff
liuhaotian/LLaVA-Lightning-7B-delta-v1-1
liuhaotian
2023-05-03T01:08:14Z
411
14
transformers
[ "transformers", "pytorch", "llava", "text-generation", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-generation
2023-05-03T00:16:20Z
--- license: apache-2.0 inference: false --- **NOTE: This "delta model" cannot be used directly.** Users have to apply it on top of the original LLaMA weights to get actual LLaVA weights. See https://github.com/haotian-liu/LLaVA#llava-weights for instructions. <br> <br> # LLaVA Model Card ## Model details **Model type:** LLaVA is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. It is an auto-regressive language model, based on the transformer architecture. **Model date:** LLaVA-Lightning was trained in May 2023. **Paper or resources for more information:** https://llava-vl.github.io/ **License:** Apache License 2.0 **Where to send questions or comments about the model:** https://github.com/haotian-liu/LLaVA/issues ## Intended use **Primary intended uses:** The primary use of LLaVA is research on large multimodal models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence. ## Training dataset 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. 80K GPT-generated multimodal instruction-following data. ## Evaluation dataset A preliminary evaluation of the model quality is conducted by creating a set of 90 visual reasoning questions from 30 unique images randomly sampled from COCO val 2014 and each is associated with three types of questions: conversational, detailed description, and complex reasoning. We utilize GPT-4 to judge the model outputs. We also evaluate our model on the ScienceQA dataset. Our synergy with GPT-4 sets a new state-of-the-art on the dataset. See https://llava-vl.github.io/ for more details.
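For readers unfamiliar with the delta weights mentioned in the note at the top of this card, the sketch below illustrates the idea only: the released tensors are offsets that get added back onto the original LLaMA weights. The file paths and the simple elementwise addition are assumptions for illustration; the official apply-delta instructions in the LLaVA repository should be followed for real conversions, since they also handle tokenizer files and vocabulary-size differences.

```python
# Conceptual sketch only (not the official conversion script). Paths are
# placeholders; real use should follow the LLaVA repository instructions.
import torch

base = torch.load("llama-7b/pytorch_model.bin", map_location="cpu")
delta = torch.load("llava-delta-7b/pytorch_model.bin", map_location="cpu")

target = {}
for name, delta_weight in delta.items():
    if name in base and base[name].shape == delta_weight.shape:
        target[name] = base[name] + delta_weight  # delta was computed as target - base
    else:
        target[name] = delta_weight  # parameters that are new or resized in LLaVA

torch.save(target, "llava-7b/pytorch_model.bin")
```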
TheBloke/Marcoroni-13B-GGUF
TheBloke
2023-09-27T12:48:50Z
411
2
transformers
[ "transformers", "gguf", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "base_model:AIDC-ai-business/Marcoroni-13B", "license:cc-by-nc-4.0", "text-generation-inference", "region:us" ]
text-generation
2023-09-11T20:39:04Z
--- language: - en license: cc-by-nc-4.0 datasets: - Open-Orca/OpenOrca model_name: Marcoroni 13B base_model: AIDC-ai-business/Marcoroni-13B inference: false model_creator: AIDC-ai-business model_type: llama pipeline_tag: text-generation prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # Marcoroni 13B - GGUF - Model creator: [AIDC-ai-business](https://huggingface.co/AIDC-ai-business) - Original model: [Marcoroni 13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B) <!-- description start --> ## Description This repo contains GGUF format model files for [AIDC-ai-business's Marcoroni 13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. GGUF offers numerous advantages over GGML, such as better tokenisation, and support for special tokens. It is also supports metadata, and is designed to be extensible. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. 
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. <!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Marcoroni-13B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Marcoroni-13B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF) * [AIDC-ai-business's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/AIDC-ai-business/Marcoroni-13B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [AIDC-ai-business's Marcoroni 13B](https://huggingface.co/AIDC-ai-business/Marcoroni-13B). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. 
This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. </details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [marcoroni-13b.Q2_K.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [marcoroni-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [marcoroni-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [marcoroni-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [marcoroni-13b.Q4_0.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [marcoroni-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [marcoroni-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [marcoroni-13b.Q5_0.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [marcoroni-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [marcoroni-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [marcoroni-13b.Q6_K.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [marcoroni-13b.Q8_0.gguf](https://huggingface.co/TheBloke/Marcoroni-13B-GGUF/blob/main/marcoroni-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/Marcoroni-13B-GGUF and below it, a specific filename to download, such as: marcoroni-13b.q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub>=0.17.1 ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/Marcoroni-13B-GGUF marcoroni-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/Marcoroni-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Marcoroni-13B-GGUF marcoroni-13b.q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows CLI users: Use `set HUGGINGFACE_HUB_ENABLE_HF_TRANSFER=1` before running the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d36d5be95a0d9088b674dbb27354107221](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m marcoroni-13b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
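Since only the ctransformers route is shown in detail below, here is a minimal llama-cpp-python sketch as well. The model path, layer offload count and sampling settings are assumptions to adapt to your setup; the prompt follows the Alpaca template from earlier in this README.

```python
from llama_cpp import Llama

# Minimal sketch; adjust model_path and n_gpu_layers for your system.
llm = Llama(model_path="./marcoroni-13b.Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=32)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about AI.\n\n### Response:\n"
)

output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```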
### How to load this model from Python using ctransformers #### First install the package ```bash # Base ctransformers with no GPU acceleration pip install ctransformers>=0.2.24 # Or with CUDA GPU acceleration pip install ctransformers[cuda]>=0.2.24 # Or with ROCm GPU acceleration CT_HIPBLAS=1 pip install ctransformers>=0.2.24 --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers ``` #### Simple example code to load one of these GGUF models ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/Marcoroni-13B-GGUF", model_file="marcoroni-13b.q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here's guides on using llama-cpp-python or ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: AIDC-ai-business's Marcoroni 13B # Marcoroni-13B Fine-tuned from Llama2-13B, we use Orca-style data and other open source data for fine-tuning. # Model Details * **Trained by**: AIDC AI-Business. * **Model type:** **Marcoroni-13B** is an auto-regressive language model based on the Llama 2 transformer architecture. * **Language(s)**: English * **License for Marcoroni-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) # Prompting ## Prompt Template for Alpaca style ``` ### Instruction: <prompt> (without the <>) ### Response: ``` # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-------| | Avg. | 65.76 | | ARC (25-shot) | 62.46 | | HellaSwag (10-shot) | 83.27 | | MMLU (5-shot) | 59.63 | | TruthfulQA (0-shot) | 57.7 | <!-- original-model-card end -->
vgaraujov/t5-base-spanish
vgaraujov
2024-03-06T19:45:09Z
411
4
transformers
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "text-generation-inference", "es", "dataset:large_spanish_corpus", "dataset:oscar-corpus/OSCAR-2109", "dataset:bertin-project/mc4-es-sampled", "arxiv:2309.11259", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2023-09-19T22:37:47Z
--- license: apache-2.0 datasets: - large_spanish_corpus - oscar-corpus/OSCAR-2109 - bertin-project/mc4-es-sampled language: - es tags: - text-generation-inference --- # T5S (base-sized model) T5S model pre-trained on Spanish language. It was introduced in the paper [Sequence-to-Sequence Spanish Pre-trained Language Models](https://arxiv.org/abs/2309.11259). ## Model description T5S is a T5 Version 1.1 model (transformer encoder-decoder) with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder, which includes the following improvements compared to the original T5 model: - GEGLU activation in feed-forward hidden layer, rather than ReLU. - Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning. - Pre-trained only on unlabeled corpus without mixing in the downstream tasks. - no parameter sharing between embedding and classifier layer T5S is particularly effective when fine-tuned for text generation (e.g. summarization, translation) or comprehension tasks (e.g. text classification, question answering) using text-to-text format. ### How to use Here is how to use this model in PyTorch: ```python from transformers import T5Tokenizer, T5Model tokenizer = T5Tokenizer.from_pretrained("vgaraujov/t5-base-spanish") model = T5Model.from_pretrained("vgaraujov/t5-base-spanish") input_ids = tokenizer( "Estudios han demostrado que tener un perro es bueno para la salud", return_tensors="pt" ).input_ids # Batch size 1 decoder_input_ids = tokenizer("Estudios demuestran que", return_tensors="pt").input_ids # Batch size 1 # forward pass outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) last_hidden_states = outputs.last_hidden_state ``` ### Citation (BibTeX) ```bibtex @misc{araujo2023sequencetosequence, title={Sequence-to-Sequence Spanish Pre-trained Language Models}, author={Vladimir Araujo and Maria Mihaela Trusca and Rodrigo Tufiño and Marie-Francine Moens}, year={2023}, eprint={2309.11259}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
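For text-to-text use cases, a generation-style sketch is given below. Note that this checkpoint is only pre-trained (no downstream tasks mixed in), so the output mainly illustrates the API; meaningful task behaviour generally requires fine-tuning first.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("vgaraujov/t5-base-spanish")
model = T5ForConditionalGeneration.from_pretrained("vgaraujov/t5-base-spanish")

input_ids = tokenizer(
    "Estudios han demostrado que tener un perro es bueno para la salud",
    return_tensors="pt",
).input_ids

# Greedy decoding of a short continuation; illustrative only for a
# pre-trained (not fine-tuned) checkpoint.
outputs = model.generate(input_ids, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```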
ChrisWilson011016/5CXB7iUnB35uDzvxTP5dss1XhTxBxoeyzMma45Cem7gA84yF_vgg
ChrisWilson011016
2024-03-04T19:09:08Z
411
0
keras
[ "keras", "region:us" ]
null
2024-02-29T13:10:35Z
Entry not found
Crataco/distilgpt2-82M-GGUF
Crataco
2024-03-21T07:27:43Z
411
1
null
[ "gguf", "exbert", "en", "dataset:openwebtext", "arxiv:1910.01108", "arxiv:2201.08542", "arxiv:2203.12574", "arxiv:1910.09700", "arxiv:1503.02531", "license:apache-2.0", "model-index", "co2_eq_emissions", "region:us" ]
null
2024-03-18T08:22:00Z
--- language: en tags: - exbert license: apache-2.0 datasets: - openwebtext model-index: - name: distilgpt2 results: - task: type: text-generation name: Text Generation dataset: type: wikitext name: WikiText-103 metrics: - type: perplexity name: Perplexity value: 21.1 co2_eq_emissions: 149200 --- distilgpt2, available in as many GGUF quantization levels as possible as of March 18th, 2024. Importance matrix was used with context set to 1,024, created with [Kalomaze's "groups_merged.txt"](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384). Don't ask why I did this. |Quantization|HellaSwag| |--------|--------| |IQ1_S |25.3% | |IQ2_XXS |27.3% | |IQ2_XS |27.5% | |IQ2_S |26.6% | |IQ2_M |27.4% | |Q2_K_S |24.4% | |Q2_K |27.3% | |IQ3_XXS |27.1% | |IQ3_XS |26.6% | |IQ3_S |26.2% | |IQ3_M |26.2% | |Q3_K_S |27.8% | |Q3_K_M |26.2% | |IQ4_XS |27.1% | |IQ4_NL |27.1% | |Q4_0 |26.3% | |Q4_K_M |26.8% | |Q5_0 |27.2% | |Q5_1 |26.7% | |Q5_K_M |26.8% | |Q6_K |26.7% | |Q8_0 |26.5% | Original model card below. *** # DistilGPT2 DistilGPT2 (short for Distilled-GPT2) is an English-language model pre-trained with the supervision of the smallest version of Generative Pre-trained Transformer 2 (GPT-2). Like GPT-2, DistilGPT2 can be used to generate text. Users of this model card should also consider information about the design, training, and limitations of [GPT-2](https://huggingface.co/gpt2). ## Model Details - **Developed by:** Hugging Face - **Model type:** Transformer-based Language Model - **Language:** English - **License:** Apache 2.0 - **Model Description:** DistilGPT2 is an English-language model pre-trained with the supervision of the 124 million parameter version of GPT-2. DistilGPT2, which has 82 million parameters, was developed using [knowledge distillation](#knowledge-distillation) and was designed to be a faster, lighter version of GPT-2. - **Resources for more information:** See [this repository](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) for more about Distil\* (a class of compressed models including Distilled-GPT2), [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108) for more information about knowledge distillation and the training procedure, and this page for more about [GPT-2](https://openai.com/blog/better-language-models/). ## Uses, Limitations and Risks #### Limitations and Risks <details> <summary>Click to expand</summary> **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** As the developers of GPT-2 (OpenAI) note in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md), “language models like GPT-2 reflect the biases inherent to the systems they were trained on.” Significant research has explored bias and fairness issues with models for language generation including GPT-2 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). DistilGPT2 also suffers from persistent bias issues, as highlighted in the demonstrative examples below. Note that these examples are not a comprehensive stress-testing of the model. Readers considering using the model should consider more rigorous evaluations of the model depending on their use case and context. 
The impact of model compression techniques – such as knowledge distillation – on bias and fairness issues associated with language models is an active area of research. For example: - [Silva, Tambwekar and Gombolay (2021)](https://aclanthology.org/2021.naacl-main.189.pdf) find that distilled versions of BERT and RoBERTa consistently exhibit statistically significant bias (with regard to gender and race) with effect sizes larger than the teacher models. - [Xu and Hu (2022)](https://arxiv.org/pdf/2201.08542.pdf) find that distilled versions of GPT-2 showed consistent reductions in toxicity and bias compared to the teacher model (see the paper for more detail on metrics used to define/measure toxicity and bias). - [Gupta et al. (2022)](https://arxiv.org/pdf/2203.12574.pdf) find that DistilGPT2 exhibits greater gender disparities than GPT-2 and propose a technique for mitigating gender bias in distilled language models like DistilGPT2. ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='distilgpt2') >>> set_seed(48) >>> generator("The White man worked as a", max_length=20, num_return_sequences=3) [{'generated_text': "The White man worked as a salesman at a McDonald's restaurant called Kia at the time of the"}, {'generated_text': 'The White man worked as a contractor in the Army in the late 1990s. He became a "'}, {'generated_text': 'The White man worked as a police spokesman to the US Navy in the 1930s.'}] >>> set_seed(48) >>> generator("The Black man worked as a", max_length=20, num_return_sequences=3) [{'generated_text': 'The Black man worked as a shop assistant for an hour at Wal-Mart at Wal-Mart in'}, {'generated_text': 'The Black man worked as a waiter in the hotel when he was assaulted when he got out of a'}, {'generated_text': 'The Black man worked as a police spokesman four months ago...'}] ``` </details> #### Potential Uses Since DistilGPT2 is a distilled version of GPT-2, it is intended to be used for similar use cases with the increased functionality of being smaller and easier to run than the base model. The developers of GPT-2 state in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) that they envisioned GPT-2 would be used by researchers to better understand large-scale generative language models, with possible secondary use cases including: > - *Writing assistance: Grammar assistance, autocompletion (for normal prose or code)* > - *Creative writing and art: exploring the generation of creative, fictional texts; aiding creation of poetry and other literary art.* > - *Entertainment: Creation of games, chat bots, and amusing generations.* Using DistilGPT2, the Hugging Face team built the [Write With Transformers](https://transformer.huggingface.co/doc/distil-gpt2) web app, which allows users to play with the model to generate text directly from their browser. #### Out-of-scope Uses OpenAI states in the GPT-2 [model card](https://github.com/openai/gpt-2/blob/master/model_card.md): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a study of biases relevant to the intended use-case. 
### How to Get Started with the Model <details> <summary>Click to expand</summary> *Be sure to read the sections on in-scope and out-of-scope uses and limitations of the model for further information on how to use the model.* Using DistilGPT2 is similar to using GPT-2. DistilGPT2 can be used directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='distilgpt2') >>> set_seed(42) >>> generator("Hello, I’m a language model", max_length=20, num_return_sequences=5) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. [{'generated_text': "Hello, I'm a language model, I'm a language model. In my previous post I've"}, {'generated_text': "Hello, I'm a language model, and I'd love to hear what you think about it."}, {'generated_text': "Hello, I'm a language model, but I don't get much of a connection anymore, so"}, {'generated_text': "Hello, I'm a language model, a functional language... It's not an example, and that"}, {'generated_text': "Hello, I'm a language model, not an object model.\n\nIn a nutshell, I"}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') model = GPT2Model.from_pretrained('distilgpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` And in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2') model = TFGPT2Model.from_pretrained('distilgpt2') text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` </details> ## Training Data DistilGPT2 was trained using [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), an open-source reproduction of OpenAI’s WebText dataset, which was used to train GPT-2. See the [OpenWebTextCorpus Dataset Card](https://huggingface.co/datasets/openwebtext) for additional information about OpenWebTextCorpus and [Radford et al. (2019)](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) for additional information about WebText. ## Training Procedure The texts were tokenized using the same tokenizer as GPT-2, a byte-level version of Byte Pair Encoding (BPE). DistilGPT2 was trained using knowledge distillation, following a procedure similar to the training procedure for DistilBERT, described in more detail in [Sanh et al. (2019)](https://arxiv.org/abs/1910.01108). ## Evaluation Results The creators of DistilGPT2 [report](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) that, on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT-2 reaches a perplexity on the test set of 16.3 compared to 21.1 for DistilGPT2 (after fine-tuning on the train set). ## Environmental Impact *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.* - **Hardware Type:** 8 16GB V100 - **Hours used:** 168 (1 week) - **Cloud Provider:** Azure - **Compute Region:** unavailable, assumed East US for calculations - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 149.2 kg eq. CO2 ## Citation ```bibtex @inproceedings{sanh2019distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas}, booktitle={NeurIPS EMC^2 Workshop}, year={2019} } ``` ## Glossary - <a name="knowledge-distillation">**Knowledge Distillation**</a>: As described in [Sanh et al. (2019)](https://arxiv.org/pdf/1910.01108.pdf), “knowledge distillation is a compression technique in which a compact model – the student – is trained to reproduce the behavior of a larger model – the teacher – or an ensemble of models.” Also see [Bucila et al. (2006)](https://www.cs.cornell.edu/~caruana/compression.kdd06.pdf) and [Hinton et al. (2015)](https://arxiv.org/abs/1503.02531). <a href="https://huggingface.co/exbert/?model=distilgpt2"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
mradermacher/HeroBophades-3x7B-GGUF
mradermacher
2024-05-06T05:10:15Z
411
0
transformers
[ "transformers", "gguf", "en", "dataset:jondurbin/truthy-dpo-v0.1", "dataset:kyujinpy/orca_math_dpo", "dataset:jondurbin/gutenberg-dpo-v0.1", "base_model:nbeerbower/HeroBophades-3x7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-07T12:06:59Z
--- base_model: nbeerbower/HeroBophades-3x7B datasets: - jondurbin/truthy-dpo-v0.1 - kyujinpy/orca_math_dpo - jondurbin/gutenberg-dpo-v0.1 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/nbeerbower/HeroBophades-3x7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q2_K.gguf) | Q2_K | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ3_XS.gguf) | IQ3_XS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q3_K_S.gguf) | Q3_K_S | 8.1 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ3_S.gguf) | IQ3_S | 8.1 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ3_M.gguf) | IQ3_M | 8.3 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q3_K_M.gguf) | Q3_K_M | 9.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q3_K_L.gguf) | Q3_K_L | 9.7 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.IQ4_XS.gguf) | IQ4_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q4_K_S.gguf) | Q4_K_S | 10.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q4_K_M.gguf) | Q4_K_M | 11.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q5_K_S.gguf) | Q5_K_S | 12.8 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q5_K_M.gguf) | Q5_K_M | 13.2 | | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q6_K.gguf) | Q6_K | 15.3 | very good quality | | [GGUF](https://huggingface.co/mradermacher/HeroBophades-3x7B-GGUF/resolve/main/HeroBophades-3x7B.Q8_0.gguf) | Q8_0 | 19.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
Sao10K/Franziska-Maxtral-8x22B-v1
Sao10K
2024-04-18T12:07:43Z
411
1
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "en", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-04-17T19:56:49Z
--- license: cc-by-nc-4.0 language: - en --- GGUF: https://huggingface.co/Sao10K/Franziska-Maxtral-8x22B-v1-GGUF An experiment. A Maxtral lora-finetune of my own, merged with (I think) Tess at a low weight via slerp or something similar. The raw adapter was slopped despite being trained on base, so that's why I added Tess; it helped somewhat. So it is a merge, yes, but at the same time part of the model is mine. Kinda meh with it, but leaving it out here. Loves to yap, with a slight positivity bias and GPT-isms. Kinda expected. It's not special or unique, just another model out there. Use Alpaca or Vicuna or \[INST] blocks or whatever.
Akirami/truthy-llama3-8b
Akirami
2024-04-29T15:53:18Z
411
1
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "bitsandbytes", "region:us" ]
text-generation
2024-04-24T07:07:01Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/Myrrh_solar_10.7b_3.0-GGUF
mradermacher
2024-05-06T04:33:00Z
411
0
transformers
[ "transformers", "gguf", "ko", "base_model:MoaData/Myrrh_solar_10.7b_3.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T09:40:57Z
--- base_model: MoaData/Myrrh_solar_10.7b_3.0 language: - ko library_name: transformers license: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/MoaData/Myrrh_solar_10.7b_3.0 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ3_XS.gguf) | IQ3_XS | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ3_S.gguf) | IQ3_S | 4.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ3_M.gguf) | IQ3_M | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Myrrh_solar_10.7b_3.0-GGUF/resolve/main/Myrrh_solar_10.7b_3.0.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
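As a rough illustration of the "Usage" note in this card (not part of the original README, which defers to TheBloke's guides), the sketch below downloads one of the quants listed in the table above and runs it with the llama-cpp-python bindings. The repo id and filename are taken from the table; the context length and the Korean prompt are assumptions.

```python
# Hypothetical sketch: fetch the Q4_K_S quant listed above and run it locally
# with llama-cpp-python. Context size and prompt are assumptions, not card facts.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Myrrh_solar_10.7b_3.0-GGUF",
    filename="Myrrh_solar_10.7b_3.0.Q4_K_S.gguf",  # "fast, recommended" in the table
)

llm = Llama(model_path=model_path, n_ctx=4096)  # assumed context length
out = llm("안녕하세요, 자기소개를 해주세요.", max_tokens=128)  # Korean prompt, since the card lists `ko`
print(out["choices"][0]["text"])
```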
ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_2
ShenaoZ
2024-05-05T11:12:40Z
411
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-05-05T10:10:03Z
--- license: mit base_model: ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_1 tags: - alignment-handbook - generated_from_trainer - trl - dpo - generated_from_trainer datasets: - updated - original model-index: - name: 0.0005_withdpo_4iters_bs256_555lr_iter_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0005_withdpo_4iters_bs256_555lr_iter_2 This model is a fine-tuned version of [ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_1](https://huggingface.co/ShenaoZ/0.0005_withdpo_4iters_bs256_555lr_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
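For readers who want to see how the hyperparameters listed above fit together, here is an illustrative sketch only (not taken from the training repository): it maps them onto 🤗 transformers `TrainingArguments`. The actual run used a trl-based DPO recipe, whose trainer signature varies by trl version; `output_dir` and the bf16 flag are assumptions.

```python
# Illustrative only: the hyperparameters from the card above expressed as
# TrainingArguments. Optimizer betas/epsilon match the stated Adam defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="0.0005_withdpo_4iters_bs256_555lr_iter_2",  # placeholder
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 GPUs x 8 per device x 4 steps = 256 effective batch
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
    bf16=True,                       # assumption; precision is not stated in the card
)
```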
mradermacher/YugoHermes-7B-GGUF
mradermacher
2024-05-20T23:12:53Z
411
0
transformers
[ "transformers", "gguf", "en", "base_model:Stopwolf/YugoHermes-7B", "endpoints_compatible", "region:us" ]
null
2024-05-20T22:25:31Z
--- base_model: Stopwolf/YugoHermes-7B language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Stopwolf/YugoHermes-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/YugoHermes-7B-GGUF/resolve/main/YugoHermes-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
ayjays132/CustomImageGenerator
ayjays132
2024-06-26T04:51:31Z
411
1
diffusers
[ "diffusers", "pytorch", "safetensors", "gpt2", "image-generation", "generative-model", "multimodal", "SOTA", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-05-23T00:36:58Z
--- tags: - image-generation - generative-model - multimodal - SOTA model_name: CustomImageGenerator model_type: image-generation description: > CustomImageGenerator is a state-of-the-art multimodal generative model based on the GPT-2 architecture, capable of generating high-quality images from textual prompts. The model combines advanced techniques from natural language processing (NLP) and computer vision to produce visually coherent and contextually relevant images. architecture: GPT-2 tasks: - image-generation references: - title: Generative Pre-trained Transformer 2.0 url: > https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf - title: Learning to Generate Images from Text url: https://arxiv.org/abs/1511.02793 - title: Stable Diffusion Models for Image Generation url: https://arxiv.org/abs/2105.05233 related_models: - name: BigGAN description: State-of-the-art generative adversarial network (GAN) for image generation. url: https://github.com/ajbrock/BigGAN-PyTorch - name: CLIP description: > Contrastive Language-Image Pre-training model for understanding images and text. url: https://github.com/openai/CLIP language: - en license: apache-2.0 --- <style> /* General Styles */ @import url('https://fonts.googleapis.com/css2?family=Montserrat:wght@400;600;800&display=swap'); body { font-family: 'Montserrat', sans-serif; background-color: #121212; margin: 0; padding: 20px; line-height: 1.6; color: #e0e0e0; display: flex; flex-direction: column; align-items: center; justify-content: center; min-height: 100vh; border-radius: 10px; background: rgba(255, 255, 255, 0.05); } .container { max-width: 900px; margin: 20px auto; padding: 40px; background-color: #1e1e1e; border-radius: 20px; box-shadow: 0 20px 40px rgba(0, 0, 0, 0.8); overflow: hidden; animation: fadeIn 1s ease-in-out; border: 2px solid #333; } @keyframes fadeIn { 0% { opacity: 0; } 100% { opacity: 1; } } .section { margin-bottom: 60px; padding: 20px; border-radius: 10px; background: rgba(255, 255, 255, 0.05); transition: background 0.3s ease, transform 0.3s ease; } .section:hover { background: rgba(255, 255, 255, 0.1); transform: translateY(-5px); } .section-header { text-align: center; margin-bottom: 40px; animation: slideIn 1s ease-in-out; border-bottom: 2px solid #333; padding-bottom: 10px; position: relative; } @keyframes slideIn { 0% { transform: translateX(-100%); opacity: 0; } 100% { transform: translateX(0); opacity: 1; } } .section-title { font-size: 36px; font-weight: 800; margin-bottom: 20px; text-transform: uppercase; letter-spacing: 2px; color: #e0e0e0; animation: fadeIn 1s ease-in-out; text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.6); } .section-description { font-size: 18px; line-height: 1.8; color: #b0b0b0; animation: fadeIn 1s ease-in-out; text-shadow: 1px 1px 3px rgba(0, 0, 0, 0.5); } .section-content { display: grid; grid-template-columns: repeat(auto-fill, minmax(250px, 1fr)); gap: 20px; } .grid-item { position: relative; overflow: hidden; border-radius: 8px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.15); transition: transform 0.3s ease, box-shadow 0.3s ease; } .grid-item img { width: 100%; height: 100%; display: block; object-fit: cover; transition: transform 0.5s ease; } .grid-item:hover { transform: translateY(-10px); box-shadow: 0 8px 16px rgba(0, 0, 0, 0.2); } .grid-item:hover img { transform: scale(1.1); } .dot-container { display: flex; justify-content: center; align-items: center; margin-top: 20px; } .dot { height: 15px; width: 15px; margin: 0 8px; background-color: 
#bbb; border-radius: 50%; cursor: pointer; transition: background-color 0.3s ease, transform 0.3s ease; } .dot.active, .dot:hover { background-color: #007bff; transform: scale(1.2); } .detail { display: flex; align-items: center; margin-bottom: 20px; color: #e0e0e0; animation: fadeIn 1s ease-in-out; padding: 10px; border-radius: 8px; transition: background 0.3s ease, transform 0.3s ease; } .detail:hover { background: rgba(255, 255, 255, 0.1); transform: translateY(-5px); } .detail-icon { margin-right: 12px; font-size: 24px; color: #007bff; } .detail-text { font-size: 18px; color: #e0e0e0; } .interactive-element { position: relative; width: 100%; height: 300px; border-radius: 20px; overflow: hidden; background: linear-gradient(135deg, #1e1e1e, #121212); box-shadow: inset 0 0 10px rgba(0, 0, 0, 0.5); transition: transform 0.3s ease; } .interactive-element::before, .interactive-element::after { content: ''; position: absolute; width: 100%; height: 100%; background: linear-gradient(135deg, rgba(255, 0, 0, 0.5), rgba(0, 0, 255, 0.5)); mix-blend-mode: screen; animation: shimmer 5s infinite; } .interactive-element::before { top: -100%; left: 0; animation-direction: alternate; } .interactive-element::after { bottom: -100%; right: 0; animation-direction: alternate-reverse; } @keyframes shimmer { 0% { transform: translateY(0); } 100% { transform: translateY(100%); } } .interactive-message { position: absolute; top: 50%; left: 50%; transform: translate(-50%, -50%); color: #e0e0e0; font-size: 24px; font-weight: 600; text-align: center; opacity: 0; transition: opacity 0.5s ease-in-out; } .interactive-element:hover .interactive-message { opacity: 1; } .form-container { margin-top: 40px; padding: 20px; border-radius: 10px; background: rgba(255, 255, 255, 0.05); box-shadow: 0 10px 20px rgba(0, 0, 0, 0.5); animation: fadeIn 1s ease-in-out; position: relative; overflow: hidden; } .form-container::before { content: ''; position: absolute; top: -50%; left: -50%; width: 200%; height: 200%; background: radial-gradient(circle, rgba(255, 255, 255, 0.1), transparent); animation: rotate 10s infinite linear; } @keyframes rotate { 0% { transform: rotate(0deg); } 100% { transform: rotate(360deg); } } .form-title { font-size: 28px; font-weight: 700; margin-bottom: 20px; text-align: center; color: #e0e0e0; text-shadow: 1px 1px 3px rgba(0, 0, 0, 0.5); } .form-field { margin-bottom: 20px; } .form-label { display: block; font-size: 16px; margin-bottom: 5px; color: #b0b0b0; text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.5); } .form-input { width: 100%; padding: 10px; border-radius: 5px; border: 1px solid #333; background: #1e1e1e; color: #e0e0e0; font-size: 16px; transition: border-color 0.3s ease, box-shadow 0.3s ease; } .form-input:focus { outline: none; border-color: #007bff; box-shadow: 0 0 5px rgba(0, 123, 255, 0.5); } .form-button { display: block; width: 100%; padding: 10px; border-radius: 5px; background: #007bff; color: #e0e0e0; font-size: 18px; font-weight: 600; text-align: center; border: none; cursor: pointer; transition: background-color 0.3s ease, transform 0.3s ease; } .form-button:hover { background: #0056b3; transform: translateY(-2px); } .form-button:active { background: #003a75; } .widget-container { background: rgba(255, 255, 255, 0.05); border-radius: 10px; padding: 20px; margin-top: 40px; animation: fadeIn 1s ease-in-out; position: relative; overflow: hidden; } .widget-container::before { content: ''; position: absolute; top: -50%; left: -50%; width: 200%; height: 200%; background: radial-gradient(circle, 
rgba(255, 255, 255, 0.1), transparent); animation: rotate 10s infinite linear; } .widget-header { text-align: center; font-size: 24px; font-weight: 700; color: #e0e0e0; margin-bottom: 20px; text-shadow: 1px 1px 3px rgba(0, 0, 0, 0.5); } .widget-content { display: flex; flex-direction: column; align-items: center; justify-content: center; text-align: center; color: #b0b0b0; } .widget-content p { margin: 10px 0; } .trendy-feature { background-color: #ffffff; padding: 40px; border-radius: 20px; box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); cursor: pointer; transition: transform 0.3s ease; margin: 20px auto; max-width: 600px; } .trendy-feature:hover { transform: translateY(-5px); } .trendy-feature h1 { font-size: 36px; margin-bottom: 20px; color: #333; } </style> <div class="container"> <div class="section"> <div class="section-header"> <h2 class="section-title">🎨 Use Cases</h2> </div> <div class="section-content"> <div class="section-description"> <p><strong>🖼️ Artistic Content Generation</strong></p> <p>CustomImageGenerator serves as a virtual canvas for artists and designers, enabling the creation of captivating artworks from mere text. Whether it's envisioning mythical landscapes or crafting futuristic cityscapes, the model ignites creativity and opens doors to boundless artistic exploration.</p> </div> </div> </div> <div class="section"> <div class="section-header"> <h2 class="section-title">ℹ️ Model Details</h2> </div> <div class="section-content"> <div class="section-description"> <p><strong>🧠 Architecture</strong></p> <p>CustomImageGenerator is built upon the GPT-2 architecture, a powerful transformer-based model renowned for its natural language processing capabilities. Leveraging GPT-2's architecture, the model seamlessly integrates text and image generation, offering a holistic approach to multimodal AI.</p> </div> </div> </div> <div class="section"> <div class="section-header"> <h2 class="section-title">🌟 Significance</h2> </div> <div class="section-content"> <div class="section-description"> <p>CustomImageGenerator represents a paradigm shift in multimodal AI, bridging the gap between language and vision to enable seamless communication and creativity. 
Its ability to generate contextually relevant images from textual prompts opens up new possibilities for artistic expression, conceptualization, and product design, ushering in a new era of human-machine collaboration and innovation.</p> </div> </div> </div> </div> <div class="container"> <div class="section"> <div class="section-header"> <h2 class="section-title">🖼️ Example Images</h2> </div> <div class="section-content"> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Ancient%20Temple%20in%20a%20Lush%20Jungle.jpg" alt="Ancient Temple in a Lush Jungle"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Cyberpunk%20Metropolis%20Scene.jpg" alt="Cyberpunk Metropolis Scene"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Extraterrestrial%20Market%20on%20a%20Desert%20Planet.jpg" alt="Extraterrestrial Market on a Desert Planet"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Generate%20an%20image%20depicting%20a%20sprawling%20fantasy%20city%20at%20dusk.jpg" alt="Sprawling Fantasy City at Dusk"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Post-Apocalyptic%20Landscape.jpg" alt="Post-Apocalyptic Landscape"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Space%20Station%20Orbiting%20a%20Distant%20Planet.jpg" alt="Space Station Orbiting a Distant Planet"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Steampunk%20Adventure%20Setting.jpg" alt="Steampunk Adventure Setting"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Steampunk%20Airship%20Fleet%20Over%20a%20Cloudy%20Sky.jpg" alt="Steampunk Airship Fleet Over a Cloudy Sky"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/Underwater%20City%20of%20the%20Future.jpg" alt="Underwater City of the Future"> </div> <div class="grid-item"> <img src="https://huggingface.co/ayjays132/CustomImageGenerator/resolve/main/ImageFiles/rudolph%20with%20red%20nose%20in%20cartoon%208k%20very%20detailed.jpg" alt="Rudolph with Red Nose in Cartoon 8k Very Detailed"> </div> </div> </div> </div> <script> let currentIndex = 0; let startX = 0; let endX = 0; document.addEventListener("DOMContentLoaded", function() { const slides = document.querySelectorAll('.carousel-item'); const dots = document.querySelectorAll('.dot'); function showSlide(index) { if (index >= slides.length) { currentIndex = 0; } else if (index < 0) { currentIndex = slides.length - 1; } else { currentIndex = index; } slides.forEach((slide, i) => { slide.classList.toggle('active', i === currentIndex); }); updateDots(); } function nextSlide() { showSlide(currentIndex + 1); } function prevSlide() { showSlide(currentIndex - 1); } function updateDots() { dots.forEach((dot, i) => { dot.classList.toggle('active', i === currentIndex); }); } function handleTouchStart(event) { startX = event.touches[0].clientX; } function handleTouchMove(event) { endX = event.touches[0].clientX; } function handleTouchEnd() { if (startX > endX + 50) { nextSlide(); } else if (startX < endX - 50) { prevSlide(); } 
} document.querySelector('.carousel-inner').addEventListener('touchstart', handleTouchStart, false); document.querySelector('.carousel-inner').addEventListener('touchmove', handleTouchMove, false); document.querySelector('.carousel-inner').addEventListener('touchend', handleTouchEnd, false); dots.forEach((dot, index) => { dot.addEventListener("click", () => { showSlide(index); }); }); showSlide(currentIndex); }); </script>
RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf
RichardErkhov
2024-05-30T16:05:04Z
411
0
null
[ "gguf", "region:us" ]
null
2024-05-30T12:23:00Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) OpenOrca-Zephyr-7B - GGUF - Model creator: https://huggingface.co/Weyaxi/ - Original model: https://huggingface.co/Weyaxi/OpenOrca-Zephyr-7B/ | Name | Quant method | Size | | ---- | ---- | ---- | | [OpenOrca-Zephyr-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q2_K.gguf) | Q2_K | 2.53GB | | [OpenOrca-Zephyr-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [OpenOrca-Zephyr-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.IQ3_S.gguf) | IQ3_S | 2.96GB | | [OpenOrca-Zephyr-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [OpenOrca-Zephyr-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.IQ3_M.gguf) | IQ3_M | 3.06GB | | [OpenOrca-Zephyr-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q3_K.gguf) | Q3_K | 3.28GB | | [OpenOrca-Zephyr-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [OpenOrca-Zephyr-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [OpenOrca-Zephyr-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [OpenOrca-Zephyr-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q4_0.gguf) | Q4_0 | 3.83GB | | [OpenOrca-Zephyr-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [OpenOrca-Zephyr-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [OpenOrca-Zephyr-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q4_K.gguf) | Q4_K | 4.07GB | | [OpenOrca-Zephyr-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [OpenOrca-Zephyr-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q4_1.gguf) | Q4_1 | 4.24GB | | [OpenOrca-Zephyr-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q5_0.gguf) | Q5_0 | 4.65GB | | [OpenOrca-Zephyr-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [OpenOrca-Zephyr-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q5_K.gguf) | Q5_K | 4.78GB | | [OpenOrca-Zephyr-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | 
[OpenOrca-Zephyr-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q5_1.gguf) | Q5_1 | 5.07GB | | [OpenOrca-Zephyr-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q6_K.gguf) | Q6_K | 5.53GB | | [OpenOrca-Zephyr-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Weyaxi_-_OpenOrca-Zephyr-7B-gguf/blob/main/OpenOrca-Zephyr-7B.Q8_0.gguf) | Q8_0 | 7.17GB | Original model description: --- license: cc-by-nc-4.0 --- <a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a> Merge of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) and [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) using ties merge. ### *Weights* - [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha): 0.5 - [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.3 ### *Density* - [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha): 0.5 - [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca): 0.5 # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-------| | Avg. | | | ARC (25-shot) | | | HellaSwag (10-shot) | | | MMLU (5-shot) | | | TruthfulQA (0-shot) | |
rhaymison/phi-3-portuguese-tom-cat-4k-instruct-q8-gguf
rhaymison
2024-06-03T10:03:59Z
411
0
transformers
[ "transformers", "gguf", "portugues", "portuguese", "QA", "instruct", "phi", "q8", "text-generation", "pt", "dataset:rhaymison/superset", "base_model:microsoft/Phi-3-mini-4k-instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-06-03T09:57:14Z
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- phi
- gguf
- q8
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
---
# Phi3 portuguese tom cat 4k instruct GGUF

<p align="center">
<img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>

This GGUF model, derived from Phi3 portuguese tom cat 4k, has been quantized in q8. The model was trained with a superset of 300,000 instructions in Portuguese and aims to help fill the gap in models available in Portuguese. It was tuned from microsoft/Phi-3-mini-4k-instruct, primarily for instructional tasks.

Remember that verbs are important in your prompt. Tell the model how to act or behave so that you can guide it along the path of its response. Important points like these help models (even smaller models like 4b) perform much better.

```python
!git lfs install
!pip install langchain
!pip install langchain-community langchain-core
!pip install llama-cpp-python

!git clone https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct-q8-gguf

def llamacpp():
    from langchain.llms import LlamaCpp
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    # NOTE: model_path should point to the .gguf file inside the cloned folder,
    # not to the folder itself.
    llm = LlamaCpp(
        model_path="/content/phi-3-portuguese-tom-cat-4k-instruct-q8-gguf",
        n_gpu_layers=40,
        n_batch=512,
        verbose=True,
    )

    template = """<s>[INST] Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto.
    Escreva uma resposta que complete adequadamente o pedido.

    ### {question}
    [/INST]"""

    prompt = PromptTemplate(template=template, input_variables=["question"])

    llm_chain = LLMChain(prompt=prompt, llm=llm)

    question = "instrução: aja como um professor de matemática e me explique porque 2 + 2 = 4?"
    response = llm_chain.run({"question": question})
    print(response)
```

### Comments

Any idea, help or report will always be welcome.

email: [email protected]

<div style="display:flex; flex-direction:row; justify-content:left">
  <a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
    <img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
  </a>
  <a href="https://github.com/rhaymisonbetini" target="_blank">
    <img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
  </a>
</div>
adalbertojunior/NeuralDaredevil-Dolphin-Portuguese
adalbertojunior
2024-06-05T20:20:16Z
411
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "pt", "dataset:adalbertojunior/dolphin_portuguese_legal", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-03T17:04:52Z
--- library_name: transformers datasets: - adalbertojunior/dolphin_portuguese_legal language: - pt --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LyliaEngine/autismmixSDXL_autismmixConfetti
LyliaEngine
2024-06-06T01:57:50Z
411
1
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:LyliaEngine/Pony_Diffusion_V6_XL", "license:cdla-permissive-2.0", "region:us" ]
text-to-image
2024-06-06T01:28:34Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: >- yoko littner, by kaisen chuui, tsukiyo \(skymint\), by masami chie, tengen toppa gurren lagann, solo, 1girl, upper body, BREAK (head back:1.1), deepthroat, neck bulge, throat bulge, double dildo, holding sex toy, arms up, tears, drooling, saliva, tongue out, BREAK pointy breasts, perky breasts, see-through silhouette, tanlines, black bikini, scarf, ponytail, BREAK outdoors, backlighting, uncensored, rating_explicit, source_anime, score_9, score_8_up parameters: negative_prompt: bar_censor, mosaic_censoring, depth of field output: url: images/141236177.jpeg base_model: LyliaEngine/Pony_Diffusion_V6_XL instance_prompt: None license: cdla-permissive-2.0 --- # autismmixSDXL_autismmixConfetti <Gallery /> ## Model description Mix of pony with some stuff. It&#39;s an attempt at making pony more predictable and less dependent on schizo negatives without removing its comprehension and artist knowledge. Personally I&#39;m using AutismMix_confetti for general use and AutismMix_pony for certain loras. If you want to train a lora on top of autism I recommend doing so in the AutismMix_pony version for better compatibility. The Lightning versions require specific settings to work, read the &quot;about model&quot; information under download. What is the difference between the models: AutismMix_confetti is a small amount of AnimeConfettiTune and AutismMix_pony. Has less style swing than pony and better hands. I prefer this one. AutismMix_pony is a merge of ponyv6 with loras, its more compatible with certain styles made for the base ponydiffusion model. AutismMix_DPO is AutismMix_confetti+DPO lora, made by request. Very similar to confetti version. Add 3d to negs if you want a more traditional anime style. Quality tags should be same as ponyv6, but feel free to experiment: &quot;score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_anime, BREAK&quot; From my testing schizo negatives and those negative embeds made for SDXL&#x2F;pony make it worse, but do whatever you want. If you have any issues running this model I suggest using this webui: https:&#x2F;&#x2F;github.com&#x2F;lllyasviel&#x2F;stable-diffusion-webui-forge As well as this extension if you get noise outputs: https:&#x2F;&#x2F;github.com&#x2F;hako-mikan&#x2F;sd-webui-prevent-artifact ## Source https://civitai.com/models/288584?modelVersionId=324524 ## Credit https://civitai.com/user/Autismix_anon ## Trigger words You should use `None` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/LyliaEngine/autismmixSDXL_autismmixConfetti/tree/main) them in the Files & versions tab.
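As a hedged illustration of the quality-tag guidance in this card (not the author's own workflow, which targets the A1111/Forge web UI), a minimal diffusers sketch might look like the following. The local checkpoint filename, step count, and everything in the prompt beyond the quoted tags are assumptions.

```python
# Hypothetical sketch: load a downloaded AutismMix_confetti checkpoint with
# diffusers and use the quality tags quoted above. Not the author's recipe.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "autismmix_confetti.safetensors",  # placeholder path to the downloaded weights
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, "
           "source_anime, BREAK 1girl, solo, upper body, outdoors",
    negative_prompt="3d",          # the card suggests adding "3d" to negatives
    guidance_scale=7,
    num_inference_steps=28,        # assumption; not specified in the card
).images[0]
image.save("autismmix_sample.png")
```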
alvdansen/japanese-poster-prints
alvdansen
2024-06-16T16:26:48Z
411
8
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:creativeml-openrail-m", "region:us" ]
text-to-image
2024-06-08T19:05:15Z
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: bubble tea, minimalist, daiton style, block print
  parameters:
    negative_prompt: signature, watermark, kanji
  output:
    url: images/ComfyUI_00461_.png
- text: a bird, daiton style
  output:
    url: images/ComfyUI_00458_.png
- text: a swallow, daiton style
  output:
    url: images/ComfyUI_00455_.png
- text: a toad
  output:
    url: images/ComfyUI_00454_.png
- text: a happy kid, minimalist, daiton style, block print
  parameters:
    negative_prompt: signature, watermark, kanji
  output:
    url: images/ComfyUI_00464_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: daiton style, block print
license: creativeml-openrail-m
---
# Japanese Poster Prints

<Gallery />

## Model description

This model does a mid-century Japanese block print style. If you have a more complex subject, you can use the trigger "daiton" to make the style stronger.

If you are seeing unusual characters or text, just use the negative prompt: signature, watermark, kanji

## Trigger words

You should use `daiton style` to trigger the image generation.

You should use `block print` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. Model release is for research purposes only. For commercial use, please contact me directly.

[Download](/alvdansen/japanese-poster-prints/tree/main) them in the Files & versions tab.
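A minimal sketch (an assumption, not part of the original card) of loading this LoRA with diffusers: the base model id, trigger words, and negative prompt come from the card above, while the prompt subject and default scheduler are placeholders.

```python
# Hypothetical usage: SDXL base + this LoRA, with the card's trigger words
# and suggested negative prompt.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("alvdansen/japanese-poster-prints")

image = pipe(
    prompt="a bird, minimalist, daiton style, block print",
    negative_prompt="signature, watermark, kanji",
).images[0]
image.save("japanese_poster_print.png")
```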
gelalalla/Dicoding-GGUF
gelalalla
2024-06-11T17:21:37Z
411
0
null
[ "gguf", "region:us" ]
null
2024-06-11T17:18:43Z
Entry not found
V3N0M/Jenna-v01-gguf-8b
V3N0M
2024-06-14T17:41:53Z
411
0
transformers
[ "transformers", "gguf", "mistral", "text-generation-inference", "unsloth", "en", "base_model:unsloth/phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-06-14T17:38:37Z
--- language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - mistral - gguf base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit --- # Uploaded model - **Developed by:** V3N0M - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Ali-C137/L3H10M-0000
Ali-C137
2024-06-21T12:41:20Z
411
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-17T23:18:24Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Ali-C137/Q2AW1M-1100
Ali-C137
2024-06-21T20:41:52Z
411
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-06-21T13:48:05Z
--- library_name: transformers license: apache-2.0 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
John6666/t-ponynai3-v55-sdxl-spo
John6666
2024-06-23T23:29:19Z
411
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "SPO", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-06-23T23:24:24Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - pony - SPO --- Original model is [here](https://civitai.com/models/317902/t-ponynai3?modelVersionId=593760).
thomsatieyi/qwen2-7b-skyrim-worldlore
thomsatieyi
2024-06-30T04:55:16Z
411
0
null
[ "gguf", "license:mit", "region:us" ]
null
2024-06-30T04:36:06Z
--- license: mit ---
Sashkanik13/sd1.5-text2img-gguf
Sashkanik13
2024-07-02T06:22:09Z
411
0
null
[ "gguf", "region:us" ]
null
2024-07-02T06:21:29Z
Entry not found
Helsinki-NLP/opus-mt-ca-es
Helsinki-NLP
2023-08-16T11:26:40Z
410
1
transformers
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "ca", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
translation
2022-03-02T23:29:04Z
---
tags:
- translation
license: apache-2.0
---

### opus-mt-ca-es

* source languages: ca
* target languages: es
* OPUS readme: [ca-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ca-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.zip)
* test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.test.txt)
* test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ca-es/opus-2020-01-15.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ca.es | 74.9 | 0.863 |
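A quick usage sketch (assumed, not from the original OPUS-MT readme) with the 🤗 transformers translation pipeline; the Catalan example sentence is made up.

```python
# Hypothetical example: Catalan → Spanish with this Marian model.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ca-es")
print(translator("El gat dorm al sofà.")[0]["translation_text"])
```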
TryStar/CloneDiffusion
TryStar
2023-05-10T19:28:23Z
410
64
diffusers
[ "diffusers", "stable-diffusion", "text-to-image", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2022-11-21T19:40:36Z
---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
This is the fine-tuned Stable Diffusion model trained on screenshots from The Clone Wars TV series.

Use the tokens "Clonewars style" in your prompts for the effect.

**If you enjoy my work, please consider supporting me:**
[![Buy me a coffee](https://badgen.net/badge/buy/Coffee/F96854)](https://ko-fi.com/trystar)

## Gradio

We support a [Gradio](https://github.com/gradio-app/gradio) Web UI to run CloneDiffusion:
[![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/akhaliq/CloneDiffusion)

**Star Wars Characters**
![Star Wars Characters](https://huggingface.co/TryStar/CloneDiffusion/resolve/main/Starwars.jpg)

**How to use?**

Use the prompt "clonewars style" before your full prompt. I recommend Steps: 50, Sampler: Euler a, and CFG scale: 7.

This model was trained with the diffusers-based DreamBooth training notebook by [TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) and was created by TryStar.
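For programmatic use, a minimal `diffusers` sketch is shown below; the prompt, fp16/CUDA setup, and output filename are illustrative assumptions, while the step count, Euler a sampler, and CFG scale follow the recommendations above.

```python
import torch
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

# Load the fine-tuned checkpoint (use torch.float32 on CPU-only machines)
pipe = StableDiffusionPipeline.from_pretrained("TryStar/CloneDiffusion", torch_dtype=torch.float16)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # "Euler a"
pipe = pipe.to("cuda")

# Place "clonewars style" before the rest of the prompt, as recommended
prompt = "clonewars style portrait of a jedi knight, cinematic lighting"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7).images[0]
image.save("clonewars_jedi.png")
```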
josu/gpt-neo-pt-1.3B
josu
2023-04-13T23:32:39Z
410
0
transformers
[ "transformers", "pytorch", "gpt_neo", "text-generation", "pt", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2023-04-11T21:16:27Z
---
language:
- pt
widget:
- text: A religião
- text: O Brasil conquistou
- text: França e Inglaterra
---

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the model and tokenizer once and reuse them in the pipeline
model = AutoModelForCausalLM.from_pretrained("josu/gpt-neo-pt-1.3B")
tokenizer = AutoTokenizer.from_pretrained("josu/gpt-neo-pt-1.3B")

generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
```
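The sketch below shows text generation with one of the widget prompts listed in this card; the sampling settings are illustrative, and the `generator` created above can be reused instead of building a new pipeline.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="josu/gpt-neo-pt-1.3B")

# "A religião" is one of the widget prompts from this card
outputs = generator("A religião", max_length=60, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```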
kwagh20ite/FLICKR_comp_v_6000
kwagh20ite
2023-08-30T03:08:15Z
410
0
diffusers
[ "diffusers", "safetensors", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-08-30T03:04:59Z
Entry not found
mmnga/rinna-bilingual-gpt-neox-4b-instruction-ppo-gguf
mmnga
2023-09-08T02:38:09Z
410
0
null
[ "gguf", "ja", "en", "license:mit", "region:us" ]
null
2023-09-02T17:29:12Z
--- license: mit language: - ja - en --- # rinna/bilingual-gpt-neox-4b-instruction-ppo [rinnaさんが公開しているbilingual-gpt-neox-4b-instruction-ppo](https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-ppo)のgguf変換版です。 他モデルはこちら [mmnga/rinna-bilingual-gpt-neox-4b-gguf](https://huggingface.co/mmnga/rinna-bilingual-gpt-neox-4b-gguf) [mmnga/rinna-bilingual-gpt-neox-4b-8k-gguf](https://huggingface.co/mmnga/rinna-bilingual-gpt-neox-4b-8k-gguf) [mmnga/rinna-bilingual-gpt-neox-4b-instruction-ppo-gguf](https://huggingface.co/mmnga/rinna-bilingual-gpt-neox-4b-instruction-ppo-gguf) [mmnga/rinna-japanese-gpt-neox-3.6b-gguf](https://huggingface.co/mmnga/rinna-japanese-gpt-neox-3.6b-gguf) [mmnga/rinna-japanese-gpt-neox-3.6b-instruction-ppo-gguf](https://huggingface.co/mmnga/rinna-japanese-gpt-neox-3.6b-instruction-ppo-gguf) *注意:こちらはブランチで試用になります。llama.cpp本家にgptneoxが実装された時に、このggufファイルが使用できない可能性があります。* ***[GitHubリポジトリの readme はこちら](https://github.com/mmnga/llama.cpp/tree/mmnga-dev)*** ## Usage (試用) ~~~~bash git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git cd llama.cpp make -j ./main -m 'rinna-bilingual-gpt-neox-4b-instruction-ppo-q4_0.gguf' -n 128 -p 'ユーザー: 吾輩って猫ですか? システム: ' --top_p 0.9 --temp 0.7 --repeat-penalty 1.1 ~~~~ **CUBLAS** ~~~~bash LLAMA_CUBLAS=1 make -j ./main -m 'rinna-bilingual-gpt-neox-4b-instruction-ppo-q4_0.gguf' -n 128 -p 'ユーザー: 吾輩って猫ですか? システム: ' -ngl 32 ~~~~ **従来のCPU実行** ~~~~bash git clone --branch mmnga-dev https://github.com/mmnga/llama.cpp.git cd llama.cpp make -j gptneox ./gptneox -m 'rinna-bilingual-gpt-neox-4b-instruction-ppo-q4_0.gguf' -n 128 -p 'ユーザー: 吾輩って猫ですか? システム: ' ~~~~
TheBloke/SuperPlatty-30B-GGUF
TheBloke
2023-09-27T12:52:45Z
410
0
transformers
[ "transformers", "gguf", "llama", "en", "base_model:ariellee/SuperPlatty-30B", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-19T23:33:40Z
--- language: - en license: other tags: - llama metrics: - MMLU - ARC - HellaSwag - TruthfulQA model_name: SuperPlatty 30B base_model: ariellee/SuperPlatty-30B inference: false model_creator: Ariel Lee model_type: llama prompt_template: 'Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # SuperPlatty 30B - GGUF - Model creator: [Ariel Lee](https://huggingface.co/ariellee) - Original model: [SuperPlatty 30B](https://huggingface.co/ariellee/SuperPlatty-30B) <!-- description start --> ## Description This repo contains GGUF format model files for [Ariel Lee's SuperPlatty 30B](https://huggingface.co/ariellee/SuperPlatty-30B). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/SuperPlatty-30B-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/SuperPlatty-30B-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF) * [Ariel Lee's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ariellee/SuperPlatty-30B) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: Alpaca ``` Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: {prompt} ### Response: ``` <!-- prompt-template end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [superplatty-30b.Q2_K.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q2_K.gguf) | Q2_K | 2 | 13.50 GB| 16.00 GB | smallest, significant quality loss - not recommended for most purposes | | [superplatty-30b.Q3_K_S.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q3_K_S.gguf) | Q3_K_S | 3 | 14.06 GB| 16.56 GB | very small, high quality loss | | [superplatty-30b.Q3_K_M.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q3_K_M.gguf) | Q3_K_M | 3 | 15.76 GB| 18.26 GB | very small, high quality loss | | [superplatty-30b.Q3_K_L.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q3_K_L.gguf) | Q3_K_L | 3 | 17.28 GB| 19.78 GB | small, substantial quality loss | | [superplatty-30b.Q4_0.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q4_0.gguf) | Q4_0 | 4 | 18.36 GB| 20.86 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [superplatty-30b.Q4_K_S.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q4_K_S.gguf) | Q4_K_S | 4 | 18.44 GB| 20.94 GB | small, greater quality loss | | [superplatty-30b.Q4_K_M.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q4_K_M.gguf) | Q4_K_M | 4 | 19.62 GB| 22.12 GB | medium, balanced quality - recommended | | [superplatty-30b.Q5_0.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q5_0.gguf) | Q5_0 | 5 | 22.40 GB| 24.90 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [superplatty-30b.Q5_K_S.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q5_K_S.gguf) | Q5_K_S | 5 | 22.40 GB| 24.90 GB | large, low quality loss - recommended | | [superplatty-30b.Q5_K_M.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q5_K_M.gguf) | Q5_K_M | 5 | 23.05 GB| 25.55 GB | large, very low quality loss - recommended | | [superplatty-30b.Q6_K.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q6_K.gguf) | Q6_K | 6 | 26.69 GB| 29.19 GB | very large, extremely low quality loss | | [superplatty-30b.Q8_0.gguf](https://huggingface.co/TheBloke/SuperPlatty-30B-GGUF/blob/main/superplatty-30b.Q8_0.gguf) | Q8_0 | 8 | 34.57 GB| 37.07 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/SuperPlatty-30B-GGUF and below it, a specific filename to download, such as: superplatty-30b.Q4_K_M.gguf. Then click Download. 
### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/SuperPlatty-30B-GGUF superplatty-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/SuperPlatty-30B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/SuperPlatty-30B-GGUF superplatty-30b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m superplatty-30b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/SuperPlatty-30B-GGUF", model_file="superplatty-30b.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Ariel Lee's SuperPlatty 30B # Information SuperPlatty-30B is a merge of [garage-bAInd/Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) and [kaiokendev/SuperCOT-LoRA](https://huggingface.co/kaiokendev/SuperCOT-LoRA) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 62.6 | | ARC (25-shot) | 66.1 | | HellaSwag (10-shot) | 83.9 | | TruthfulQA (0-shot) | 54.0 | | Avg. | 66.6 | We use state-of-the-art EleutherAI [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above. ## Model Details * **Trained by**: Platypus-30B trained by Cole Hunter & Ariel Lee; SuperCOT-LoRA trained by kaiokendev. * **Model type:** **SuperPlatty-30B** is an auto-regressive language model based on the LLaMA transformer architecture. * **Language(s)**: English * **License for base weights**: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md). | Hyperparameter | Value | |---------------------------|-------| | \\(n_\text{parameters}\\) | 33B | | \\(d_\text{model}\\) | 6656 | | \\(n_\text{layers}\\) | 60 | | \\(n_\text{heads}\\) | 52 | ## Reproducing Evaluation Results Install LM Evaluation Harness: ``` git clone https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e . ``` Each task was evaluated on a single A100 80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/SuperPlatty-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/SuperPlatty-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/SuperPlatty-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/SuperPlatty-30B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/SuperPlatty-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/SuperPlatty-30B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/SuperPlatty-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/SuperPlatty-30B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA paper. 
We have not performed any studies to determine how fine-tuning on the aforementioned datasets affect the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly. ## Citations ```bibtex @article{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ``` <!-- original-model-card end -->
TheBloke/PuddleJumper-13B-V2-GGUF
TheBloke
2023-09-27T12:54:18Z
410
3
transformers
[ "transformers", "gguf", "llama", "dataset:totally-not-an-llm/EverythingLM-data-V3", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "base_model:totally-not-an-llm/PuddleJumper-13b-V2", "license:other", "text-generation-inference", "region:us" ]
null
2023-09-23T22:33:40Z
--- license: other datasets: - totally-not-an-llm/EverythingLM-data-V3 - Open-Orca/OpenOrca - garage-bAInd/Open-Platypus model_name: PuddleJumper 13B V2 base_model: totally-not-an-llm/PuddleJumper-13b-V2 inference: false model_creator: Kai Howard model_type: llama prompt_template: 'USER: {prompt} ASSISTANT: ' quantized_by: TheBloke --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content: space-between; width: 100%;"> <div style="display: flex; flex-direction: column; align-items: flex-start;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p> </div> <div style="display: flex; flex-direction: column; align-items: flex-end;"> <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p> </div> </div> <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div> <hr style="margin-top: 1.0em; margin-bottom: 1.0em;"> <!-- header end --> # PuddleJumper 13B V2 - GGUF - Model creator: [Kai Howard](https://huggingface.co/totally-not-an-llm) - Original model: [PuddleJumper 13B V2](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b-V2) <!-- description start --> ## Description This repo contains GGUF format model files for [Kai Howard's PuddleJumper 13B V2](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b-V2). <!-- description end --> <!-- README_GGUF.md-about-gguf start --> ### About GGUF GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplate list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. 
<!-- README_GGUF.md-about-gguf end --> <!-- repositories-available start --> ## Repositories available * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-AWQ) * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GPTQ) * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF) * [Kai Howard's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b-V2) <!-- repositories-available end --> <!-- prompt-template start --> ## Prompt template: User-Assistant ``` USER: {prompt} ASSISTANT: ``` <!-- prompt-template end --> <!-- licensing start --> ## Licensing The creator of the source model has listed its license as `other`, and this quantization has therefore used that same license. As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered as being claimed to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly. In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [Kai Howard's PuddleJumper 13B V2](https://huggingface.co/totally-not-an-llm/PuddleJumper-13b-V2). <!-- licensing end --> <!-- compatibility_gguf start --> ## Compatibility These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) They are also compatible with many third party UIs and libraries - please see the list at the top of this README. ## Explanation of quantisation methods <details> <summary>Click to see details</summary> The new methods available are: * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weight. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw) * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This end up using 3.4375 bpw. * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw. * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw Refer to the Provided Files table below to see what files use which methods, and how. 
</details> <!-- compatibility_gguf end --> <!-- README_GGUF.md-provided-files start --> ## Provided files | Name | Quant method | Bits | Size | Max RAM required | Use case | | ---- | ---- | ---- | ---- | ---- | ----- | | [puddlejumper-13b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes | | [puddlejumper-13b-v2.Q3_K_S.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss | | [puddlejumper-13b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss | | [puddlejumper-13b-v2.Q3_K_L.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss | | [puddlejumper-13b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M | | [puddlejumper-13b-v2.Q4_K_S.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss | | [puddlejumper-13b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended | | [puddlejumper-13b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M | | [puddlejumper-13b-v2.Q5_K_S.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended | | [puddlejumper-13b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended | | [puddlejumper-13b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss | | [puddlejumper-13b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/PuddleJumper-13B-V2-GGUF/blob/main/puddlejumper-13b-v2.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended | **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead. <!-- README_GGUF.md-provided-files end --> <!-- README_GGUF.md-how-to-download start --> ## How to download GGUF files **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. 
The following clients/libraries will automatically download models for you, providing a list of available models to choose from: - LM Studio - LoLLMS Web UI - Faraday.dev ### In `text-generation-webui` Under Download Model, you can enter the model repo: TheBloke/PuddleJumper-13B-V2-GGUF and below it, a specific filename to download, such as: puddlejumper-13b-v2.Q4_K_M.gguf. Then click Download. ### On the command line, including multiple files at once I recommend using the `huggingface-hub` Python library: ```shell pip3 install huggingface-hub ``` Then you can download any individual model file to the current directory, at high speed, with a command like this: ```shell huggingface-cli download TheBloke/PuddleJumper-13B-V2-GGUF puddlejumper-13b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` <details> <summary>More advanced huggingface-cli download usage</summary> You can also download multiple files at once with a pattern: ```shell huggingface-cli download TheBloke/PuddleJumper-13B-V2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf' ``` For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli). To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`: ```shell pip3 install hf_transfer ``` And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`: ```shell HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/PuddleJumper-13B-V2-GGUF puddlejumper-13b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False ``` Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command. </details> <!-- README_GGUF.md-how-to-download end --> <!-- README_GGUF.md-how-to-run start --> ## Example `llama.cpp` command Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later. ```shell ./main -ngl 32 -m puddlejumper-13b-v2.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "USER: {prompt}\nASSISTANT:" ``` Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins` For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md) ## How to run in `text-generation-webui` Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md). ## How to run from Python code You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. 
### How to load this model in Python code, using ctransformers #### First install the package Run one of the following commands, according to your system: ```shell # Base ctransformers with no GPU acceleration pip install ctransformers # Or with CUDA GPU acceleration pip install ctransformers[cuda] # Or with AMD ROCm GPU acceleration (Linux only) CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers # Or with Metal GPU acceleration for macOS systems only CT_METAL=1 pip install ctransformers --no-binary ctransformers ``` #### Simple ctransformers example code ```python from ctransformers import AutoModelForCausalLM # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system. llm = AutoModelForCausalLM.from_pretrained("TheBloke/PuddleJumper-13B-V2-GGUF", model_file="puddlejumper-13b-v2.Q4_K_M.gguf", model_type="llama", gpu_layers=50) print(llm("AI is going to")) ``` ## How to use with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp) * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) <!-- README_GGUF.md-how-to-run end --> <!-- footer start --> <!-- 200823 --> ## Discord For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/theblokeai) ## Thanks, and how to contribute Thanks to the [chirper.ai](https://chirper.ai) team! Thanks to Clay from [gpus.llm-utils.org](llm-utils)! I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training. If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects. Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits. * Patreon: https://patreon.com/TheBlokeAI * Ko-Fi: https://ko-fi.com/TheBlokeAI **Special thanks to**: Aemon Algiz. **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. 
Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov Thank you to all my generous patrons and donaters! And thank you again to a16z for their generous grant. <!-- footer end --> <!-- original-model-card start --> # Original model card: Kai Howard's PuddleJumper 13B V2 Merge of EverythingLM-V3-13b QLoRa and OpenOrca-Platypus2-13B. ### Prompt format: ``` USER: <prompt> ASSISTANT: ``` <!-- original-model-card end -->
newsmediabias/UnBIAS-Named-Entity-Recognition
newsmediabias
2023-10-07T23:40:27Z
410
1
transformers
[ "transformers", "pytorch", "bert", "token-classification", "en", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-10-07T15:15:45Z
---
license: mit
language:
- en
---

# Named entity recognition

## Model Description

This model is a fine-tuned token classification model designed to predict entities in sentences. It's fine-tuned on a custom dataset that focuses on identifying certain types of entities, including biases in text.

## Intended Use

The model is intended to be used for entity recognition tasks, especially for identifying biases in text passages. Users can input a sequence of text, and the model will highlight words or tokens or **spans** it believes are associated with a particular entity or bias.

## How to Use

The model can be used for inference directly through the Hugging Face `transformers` library:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("newsmediabias/UnBIAS-Named-Entity-Recognition")
model = AutoModelForTokenClassification.from_pretrained("newsmediabias/UnBIAS-Named-Entity-Recognition")
model.to(device)  # move the model to the same device as the inputs
model.eval()

def predict_entities(sentence):
    tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sentence)))
    inputs = tokenizer.encode(sentence, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(inputs).logits
    predictions = torch.argmax(outputs, dim=2)
    id2label = model.config.id2label

    # Reconstruct words from subword tokens
    biased_words = []
    current_word = ""
    for token, prediction in zip(tokens, predictions[0]):
        label = id2label[prediction.item()]
        if label in ['B-BIAS', 'I-BIAS']:
            if token.startswith('##'):
                current_word += token[2:]
            else:
                if current_word:
                    biased_words.append(current_word)
                current_word = token
    if current_word:
        biased_words.append(current_word)

    # Filter out special tokens and subword tokens
    biased_words = [word for word in biased_words if not word.startswith('[') and not word.endswith(']') and not word.startswith('##')]
    return biased_words

sentence = "due to your evil and dishonest nature, i am kind of tired and want to get rid of such cheapters. all people like you are evil and a disgrace to society and I must say to get rid of immigrants as they are filthy to culture"
biased_words = predict_entities(sentence)
for word in biased_words:
    print(f"Biased Word: {word}")
```

## Limitations and Biases

Every model has limitations, and it's crucial to understand these when deploying models in real-world scenarios:

1. **Training Data**: The model is trained on a specific dataset, and its predictions are only as good as the data it's trained on.
2. **Generalization**: While the model may perform well on certain types of sentences or phrases, it might not generalize well to all types of text or contexts.

It's also essential to be aware of any potential biases in the training data, which might affect the model's predictions.

## Training Data

The model was fine-tuned on a custom dataset. Ask **Shaina Raza [email protected]** for the dataset.
mmnga/webbigdata-ALMA-7B-Ja-gguf
mmnga
2023-11-06T13:14:52Z
410
2
null
[ "gguf", "arxiv:2309.11674", "license:other", "region:us" ]
null
2023-10-10T14:55:14Z
--- license: other --- # webbigdata-ALMA-7B-Ja-gguf [webbigdataさんが公開しているALMA-7B-Ja](https://huggingface.co/webbigdata/ALMA-7B-Ja)のggufフォーマット変換版です。 v1のggufの各量子化の評価がwebbigdataさんのblogで公開されています [webbigdata/1.日英・英日機械翻訳モデルALMA-7B-Ja-V2の公開](https://webbigdata.jp/post-21151/) モデル一覧 [mmnga/webbigdata-ALMA-7B-Ja-V2-gguf](https://huggingface.co/mmnga/webbigdata-ALMA-7B-Ja-V2-gguf) [mmnga/webbigdata-ALMA-7B-Ja-gguf](https://huggingface.co/mmnga/webbigdata-ALMA-7B-Ja-gguf) ## Usage ``` git clone https://github.com/ggerganov/llama.cpp.git cd llama.cpp make -j ./main -m 'webbigdata-ALMA-7B-Ja-q4_0.gguf' -n 128 -p 'Translate this from Japanese to English:\nJapanese: 今日の夕食はピザです。\nEnglish:' ``` ### 引用 Citations ~~~tex @misc{xu2023paradigm, title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models}, author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla}, year={2023}, eprint={2309.11674}, archivePrefix={arXiv}, primaryClass={cs.CL} } ~~~
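As an alternative to the llama.cpp CLI above, the same GGUF file can be loaded from Python with `llama-cpp-python`; this is a minimal sketch (not part of the original instructions) and assumes the q4_0 file has already been downloaded to the working directory.

```python
from llama_cpp import Llama

# Assumes webbigdata-ALMA-7B-Ja-q4_0.gguf is in the current directory
llm = Llama(model_path="webbigdata-ALMA-7B-Ja-q4_0.gguf", n_ctx=2048)

prompt = "Translate this from Japanese to English:\nJapanese: 今日の夕食はピザです。\nEnglish:"
output = llm(prompt, max_tokens=128, stop=["\n"])
print(output["choices"][0]["text"])
```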
Baghdad99/saad-speech-recognition-hausa-audio-to-text
Baghdad99
2023-11-05T14:42:27Z
410
8
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ha", "dataset:mozilla-foundation/common_voice_13_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2023-11-05T11:13:44Z
--- language: - ha license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - mozilla-foundation/common_voice_13_0 metrics: - wer model-index: - name: Hausa Whisper Small - Saad results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 13 type: mozilla-foundation/common_voice_13_0 config: ha split: test args: ha metrics: - name: Wer type: wer value: 44.41266209000763 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Hausa Whisper Small - Saad This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset. It achieves the following results on the evaluation set: - Loss: 0.7524 - Wer Ortho: 47.7050 - Wer: 44.4127 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 50 - training_steps: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:| | 0.0104 | 3.18 | 500 | 0.7524 | 47.7050 | 44.4127 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0+cu118 - Datasets 2.14.6 - Tokenizers 0.14.1
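### Example usage (sketch)

A minimal inference sketch with the `transformers` ASR pipeline; the audio file path is a placeholder for a local Hausa recording.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Baghdad99/saad-speech-recognition-hausa-audio-to-text",
)

# "sample_hausa.wav" is a placeholder path to a local audio file
result = asr("sample_hausa.wav")
print(result["text"])
```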
hkust-nlp/deita-7b-v1.0-sft
hkust-nlp
2023-12-29T14:57:59Z
410
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "en", "dataset:hkust-nlp/deita-6k-v0", "arxiv:2312.15685", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2023-12-17T12:54:38Z
--- license: apache-2.0 datasets: - hkust-nlp/deita-6k-v0 language: - en --- <img src="https://huggingface.co/datasets/hkust-nlp/deita-images/resolve/main/logo-final.png" alt="Deita banner" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Deita 7B V1.0 SFT [GitHub](https://github.com/hkust-nlp/deita) | [Paper](https://arxiv.org/abs/2312.15685) Deita is an open-sourced project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs). Deita 7B V1.0 SFT is a fine-tuned version of Mistral-7B-v0.1 that was trained on 6k automatically selected lightweight, high-quality alignment SFT data: [Deita 6K V0](https://huggingface.co/datasets/hkust-nlp/deita-6k-v0). ## Model description - **Model type:** Model fine tuned on automatically selected lightweight, high-quality alignment SFT data. - **Language(s) (NLP):** Primarily English - **Finetuned from model:** Mistral-7B-v0.1 ### Model Sources - **Repository:** https://github.com/hkust-nlp/deita - **Model Family:** Other models and the dataset are found in the [Deita collection](https://huggingface.co/collections/hkust-nlp/deita-6569c198c174808d94cf5bd4). ## Performance | Model | Align | Data Size | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) | |------------------------------------------------|-----------|------------|----------|---------------|----------------| | **Proprietary Models** | | | | | | | GPT-4-Turbo | ? | -- | 9.32 | 97.70 | -- | | GPT-4 | SFT + PPO | -- | 8.99 | 95.03 | -- | | Claude-2 | SFT + PPO | -- | 8.06 | 91.36 | -- | | GPT-3.5-turbo | SFT + PPO | -- | 7.94 | 89.37 | -- | | **Open-sourced Models based on LLaMA-1-13B** | | | | | | | LIMA | SFT | 1K SFT | 4.29 | 41.98 | 59.82 | | WizardLM-13B | SFT | 70K SFT | 6.35 | 75.31 | 58.96 | | Vicuna-13B-v1.3 | SFT | 125K SFT | 6.39 | 82.11 | 60.01 | | Random | SFT | 10K SFT | 6.03 | 71.52 | 60.14 | | DEITA-LLaMA1-13B-v1.0-sft | SFT | 10K SFT | 6.60 | 78.01 | 64.27 | | **Open-sourced Models based on LLaMA-2-13B** | | | | | | | Tulu-2-13B | SFT | 326K SFT | 6.70 | 78.90 | -- | | Tulu-2-13B+DPO | SFT + DPO | 326K SFT + 60K DPO | 7.00 | 89.50 | -- | | LLaMA2-13B-Chat | SFT + PPO | -- | 6.65 | 81.09 | -- | | WizardLM-13B-v1.2 | SFT | >70K SFT | 7.09 | 89.17 | -- | | Vicuna-13B-v1.5 | SFT | 125K SFT | 6.57 | 78.80 | 61.63 | | Random | SFT | 10K SFT | 5.78 | 65.19 | 61.32 | | DEITA-LLaMA2-13B-v1.0-sft | SFT | 10K SFT | 6.79 | 81.09 | 62.71 | | **Open-sourced Models based on Mistral-7B** | | | | | | | Mistral-7B-Instruct-v0.1 | -- | -- | 6.84 | 69.65 | 60.45 | | Zephyr-7B-sft | SFT | 200K SFT | 5.32 | 75.12 | 60.93 | | $\text{Zephyr-7B-}\beta$ | SFT + DPO | 200K SFT + 60K DPO | 7.34 | 90.60 | 66.36 | | OpenChat-3.5 | C-RLFT | >> 70K C-RLFT | 7.81 | 88.51 | -- | | Starling-7B | C-RLFT + APA | >>70K C-RLFT + 183K APA | 8.09 | 91.99 | -- | | Random | SFT | 10K SFT | 5.89 | 56.90 | 61.72 | | DEITA-7B-v1.0-sft (6K) | SFT | 6K SFT | 7.22 | 80.78 | 64.94 | | DEITA-7B-v1.0-sft (10K) | SFT | 10K SFT | 7.32 | 81.67 | 64.00 | | DEITA-7B-v1.0 | SFT + DPO | 6K SFT + 10K DPO | 7.55 | 90.06 | 69.86 | ## Input Format The model is trained using the [vicuna_v1.1 template](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py) ``` A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hello! ASSISTANT: Hi!</s>USER: How are you? 
ASSISTANT: ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 128 - total_train_batch_size: 512 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 6.0 ### Framework versions - Transformers 4.34.1 - Pytorch 2.1.0+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1 ## Citation If you find the content of this project helpful, please cite our paper as follows: ``` @misc{liu2023what, title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning}, author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He}, year={2023}, eprint={2312.15685}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
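## Example usage (sketch)

A minimal generation sketch that applies the vicuna_v1.1 prompt format described under "Input Format"; the user question and sampling settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-7b-v1.0-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build the vicuna_v1.1-style prompt described above
system = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the user's questions.")
prompt = f"{system} USER: What makes good data for instruction tuning? ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```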
lrds-code/boana-7b-instruct
lrds-code
2024-04-03T01:03:34Z
410
4
transformers
[ "transformers", "safetensors", "llama", "text-generation", "peft", "portuguese", "instruct", "conversational", "pt", "license:llama2", "model-index", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-20T14:06:08Z
--- license: llama2 language: - pt pipeline_tag: text-generation library_name: transformers tags: - llama - peft - portuguese - instruct model-index: - name: boana-7b-instruct results: - task: type: text-generation dataset: type: Muennighoff/xwinograd name: XWinograd (pt) config: pt split: test metrics: - type: Accuracy value: 50.57 --- <hr> # README <hr> # Boana-7B-Instruct <p align="center"> <img width="250" alt="Boana Logo" src="https://cdn-uploads.huggingface.co/production/uploads/658c21f4c1229bf113295773/_Dc6z1IAL-zUiy0LMSSqf.png"> </p> Boana-7B-Instruct é um LLM treinado em dados da língua portuguesa. O modelo é baseado no [LLaMA2-7B](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), uma versão de 7B de parâmetros do LLaMA-2. O projeto Boana tem como objetivo oferecer opções de LLM em língua portuguesa, ao mesmo tempo que disponibiliza um modelo menos complexo para que, dessa forma, usuários com menos poder computacional possam usufruir das LLMs. Em apoio aos países de língua portuguesa. <p align="center"> <img width="250" alt="Countries Logo" src="https://cdn-uploads.huggingface.co/production/uploads/658c21f4c1229bf113295773/d3twZrXng5eDjg_LbH4pF.png"> </p> ### Descrição do Modelo - **Desenvolvido por:** [Leonardo Souza](https://huggingface.co/lrds-code) - **Tipo do modelo:** LLaMA-Based - **Licença:** Academic Free License v3.0 - **Fine-tunado do modelo:** [LLaMA2-7B](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) ## Como Usar ```python import torch from transformers import pipeline boana = pipeline('text-generation', model='lrds-code/boana-7b-instruct', torch_dtype=torch.bfloat16, device_map='auto') messages = [{'role':'system', 'content':''}, {'role':'user', 'content':'Quantos planetas existem no sistema solar?'}] prompt = boana.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = boana(prompt, max_new_tokens=256, do_sample=False, temperature=0, top_k=50, top_p=0.95) print(outputs[0]['generated_text']) # <s>[INST] <<SYS>> # <</SYS>> # Quantos planetas existem no sistema solar? [/INST] O sistema solar consiste em 8 planetas: # 1. Mercurio # 2. Vênus # 3. Terra # 4. Marte # 5. Júpiter # 6. Saturno # 8. Netuno # Além desses planetas, o sistema solar também inclui outros corpos celestes, como asteroides, cometas e anões, bem como várias luas e satélites naturais ``` ## Parâmetros Importantes - **repetition_penalty:** é utilizado para evitar a repetição de palavras ou frases. Quando esse valor é ajustado para ser maior que 1, o modelo tenta diminuir a probabilidade de gerar palavras que já apareceram anteriormente. Basicamente, quanto maior o valor, mais o modelo tenta evitar repetições. - **do_sample:** determina se o modelo deve ou não amostrar aleatoriamente a próxima palavra com base nas probabilidades calculadas. Portanto, **do_sample=True** introduz variação e imprevisibilidade no texto gerado, enquanto que se **do_sample=False** o modelo escolherá sempre a palavra mais provável como próxima palavra, o que pode levar a saídas mais determinísticas e, possivelmente, mais repetitivas. - **temperature:** afeta a aleatoriedade na escolha da próxima palavra. Um valor baixo (próximo de 0) faz com que o modelo seja mais "confiante" nas suas escolhas, favorecendo palavras com alta probabilidade e levando a saídas mais previsíveis. Por outro lado, um valor alto aumenta a aleatoriedade, permitindo que o modelo escolha palavras menos prováveis, o que pode tornar o texto gerado mais variado e criativo.
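To make the parameter notes above concrete, here is a minimal sketch that passes those sampling settings explicitly; the user question and the specific values are illustrative choices, not recommendations from the model author.

```python
import torch
from transformers import pipeline

boana = pipeline('text-generation', model='lrds-code/boana-7b-instruct',
                 torch_dtype=torch.bfloat16, device_map='auto')

messages = [{'role': 'system', 'content': ''},
            {'role': 'user', 'content': 'Explique em uma frase o que é aprendizado de máquina.'}]
prompt = boana.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# do_sample=True enables sampling; temperature controls randomness;
# repetition_penalty > 1 discourages repeated words and phrases
outputs = boana(prompt, max_new_tokens=128, do_sample=True, temperature=0.7, repetition_penalty=1.15)
print(outputs[0]['generated_text'])
```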
mradermacher/MistralStoriesGenerator-GGUF
mradermacher
2024-05-06T05:08:23Z
410
0
transformers
[ "transformers", "gguf", "en", "base_model:donutsan/MistralStoriesGenerator", "endpoints_compatible", "region:us" ]
null
2024-04-08T11:56:37Z
--- base_model: donutsan/MistralStoriesGenerator language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/donutsan/MistralStoriesGenerator <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MistralStoriesGenerator-GGUF/resolve/main/MistralStoriesGenerator.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
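To complement the Usage section above, here is a minimal, illustrative sketch of running one of the provided quants with `llama-cpp-python`; the local file path, context size and sampling settings are assumptions for demonstration, not values specified by this card.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has already been downloaded locally.
llm = Llama(model_path="MistralStoriesGenerator.Q4_K_M.gguf", n_ctx=2048)

out = llm("Once upon a time", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```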
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
stvlynn/Qwen-7B-Chat-Cantonese
stvlynn
2024-05-06T09:36:16Z
410
13
transformers
[ "transformers", "safetensors", "qwen", "feature-extraction", "Cantonese", "廣東話", "粤语", "text-generation", "conversational", "custom_code", "zh", "dataset:stvlynn/Cantonese-Dialogue", "base_model:Qwen/Qwen-7B-Chat", "license:agpl-3.0", "region:us" ]
text-generation
2024-05-04T02:11:03Z
--- license: agpl-3.0 datasets: - stvlynn/Cantonese-Dialogue language: - zh pipeline_tag: text-generation tags: - Cantonese - 廣東話 - 粤语 base_model: Qwen/Qwen-7B-Chat --- # Qwen-7B-Chat-Cantonese (通义千问·粤语) ## Intro Qwen-7B-Chat-Cantonese is a fine-tuned version based on Qwen-7B-Chat, trained on a substantial amount of Cantonese language data. Qwen-7B-Chat-Cantonese係基於Qwen-7B-Chat嘅微調版本,基於大量粵語數據進行訓練。 [ModelScope(魔搭社区)](https://www.modelscope.cn/models/stvlynn/Qwen-7B-Chat-Cantonese) ## Usage ### Requirements * python 3.8 and above * pytorch 1.12 and above, 2.0 and above are recommended * CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.) ### Dependency To run Qwen-7B-Chat-Cantonese, please make sure you meet the above requirements, and then execute the following pip commands to install the dependent libraries. ```bash pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed ``` In addition, it is recommended to install the `flash-attention` library (**we support flash attention 2 now.**) for higher efficiency and lower memory usage. ```bash git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . ``` ### Quickstart Pls turn to QwenLM/Qwen - [Quickstart](https://github.com/QwenLM/Qwen?tab=readme-ov-file#quickstart) ## Training Parameters | Parameter | Description | Value | |-----------------|----------------------------------------|--------| | Learning Rate | AdamW optimizer learning rate | 7e-5 | | Weight Decay | Regularization strength | 0.8 | | Gamma | Learning rate decay factor | 1.0 | | Batch Size | Number of samples per batch | 1000 | | Precision | Floating point precision | fp16 | | Learning Policy | Learning rate adjustment policy | cosine | | Warmup Steps | Initial steps without learning rate adjustment | 0 | | Total Steps | Total training steps | 1024 | | Gradient Accumulation Steps | Number of steps to accumulate gradients before updating | 8 | ![loss](https://cdn.statically.io/gh/stvlynn/cloudimg@master/blog/2310/image.q9v1ak08ljk.webp) ## Demo ![深水埗有哪些美食](https://cdn.statically.io/gh/stvlynn/cloudimg@master/blog/2310/截屏2024-05-04-11.59.27.2bea6k113e68.webp) ![鲁迅为什么打周树人](https://cdn.statically.io/gh/stvlynn/cloudimg@master/blog/2310/截屏2024-05-04-11.56.46.72tt5czl2gw0.webp) ![树上几只鸟](https://cdn.statically.io/gh/stvlynn/cloudimg@master/blog/2310/截屏2024-05-04-12.00.38.267hvmc3z3c0.webp) ## Special Note This is my first fine-tuning LLM project. Pls forgive me if there's anything wrong. If you have any questions or suggestions, feel free to contact me. [Twitter @stv_lynn](https://x.com/stv_lynn) [Telegram @stvlynn](https://t.me/stvlynn) [email [email protected]](mailto://[email protected])
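Since the Quickstart above only links out to the upstream Qwen repository, here is a minimal, illustrative sketch of loading this fine-tune with 🤗 Transformers. It assumes the repository keeps the base Qwen-7B-Chat custom modelling code and its `chat()` helper (hence `trust_remote_code=True`); the example question is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stvlynn/Qwen-7B-Chat-Cantonese"

# Qwen-7B-Chat ships custom modelling code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
).eval()

# chat() comes from the Qwen remote code and also returns the updated history.
response, history = model.chat(tokenizer, "深水埗有咩好食?", history=None)
print(response)
```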
qwp4w3hyb/openchat-3.6-8b-20240522-iMat-GGUF
qwp4w3hyb
2024-05-29T09:56:55Z
410
0
transformers
[ "transformers", "gguf", "openchat", "llama3", "C-RLFT", "merge", "imatrix", "importance matrix", "text-generation", "arxiv:2309.11235", "base_model:openchat/openchat-3.6-8b-20240522", "license:llama3", "endpoints_compatible", "region:us" ]
text-generation
2024-05-29T09:06:40Z
--- license: llama3 library_name: transformers pipeline_tag: text-generation base_model: openchat/openchat-3.6-8b-20240522 tags: - openchat - llama3 - C-RLFT - merge - gguf - imatrix - importance matrix model-index: - name: openchat-3.6-8b-20240522-iMat-GGUF results: [] --- # Quant Infos - quants done with an importance matrix for improved quantization loss - ggufs & imatrix generated from bf16 for "optimal" accuracy loss - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [477973d2e190815d4e13545370504776433789cf](https://github.com/ggerganov/llama.cpp/commit/477973d2e190815d4e13545370504776433789cf) (master as of 2024-05-22) - Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski). ``` ./imatrix -c 512 -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix ``` # Original Model Card <div align="center"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%"> <h1>Advancing Open-source Language Models with Mixed-Quality Data</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://openchat.team"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/imoneoi/openchat"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="https://arxiv.org/pdf/2309.11235.pdf"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/pQjnXvNKHY"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> <p align="center" style="margin-top: 0px;"> <span class="link-text" style=" margin-right: 0px; font-size: 0.8em">Sponsored by RunPod</span> <img src="https://styles.redditmedia.com/t5_6075m3/styles/profileIcon_71syco7c5lt81.png?width=256&height=256&frame=1&auto=webp&crop=256:256,smart&s=24bd3c71dc11edc5d4f88d0cbc1da72ed7ae1969" alt="RunPod Logo" style="width:30px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> </p> <div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center"> <a href="https://huggingface.co/openchat/openchat-3.5-0106" style="text-decoration: none; color: black;"> <span 
style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">Llama 3 Version: OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.6</span> <span style="font-size: 1.0em; font-family: 'Helvetica'; color: white; background-color: #90e0ef; vertical-align: top; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">20240522</span> <span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;"> <br> 🏆 The Overall Best Performing Open-source 8B Model 🏆 <br> 🚀 Outperforms Llama-3-8B-Instruct and open-source finetunes/merges 🚀 </span> </a> </div> <div style="display: flex; justify-content: center; align-items: center; width: 110%; margin-left: -5%;"> <img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/benchmarks-openchat-3.6-20240522.svg" style="width: 100%; border-radius: 1em"> </div> <div style="display: flex; justify-content: center; align-items: center"> <p>* Llama-3-Instruct often fails to follow the few-shot templates. See <a href="https://huggingface.co/openchat/openchat-3.6-8b-20240522/discussions/6">example</a>.</p> </div> <div align="center"> <h2> Usage </h2> </div> To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command. Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience. If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server. | Model | Size | Context | Weights | Serving | |-----------------------|------|---------|-------------------------------------------------------------------------|---------------------------------------------------------------------------------------| | OpenChat-3.6-20240522 | 8B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat-3.6-8b-20240522) | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.6-8b-20240522` | <details> <summary>Example request (click to expand)</summary> ```bash curl http://localhost:18888/v1/chat/completions \ -H "Content-Type: application/json" \ -d '{ "model": "openchat_3.6", "messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}] }' ``` </details> ### Conversation templates 💡 **Default Mode**: Best for coding, chat and general tasks. 
It's a modified version of the Llama 3 Instruct template, the only difference is role names, which are either `GPT4 Correct User` or `GPT4 Correct Assistant` ``` <|start_header_id|>GPT4 Correct User<|end_header_id|>\n\nHello<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\nHi<|eot_id|><|start_header_id|>GPT4 Correct User<|end_header_id|>\n\nHow are you today?<|eot_id|><|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\n ``` ⚠️ **Notice:** Remember to set `<|eot_id|>` as end of generation token. The default template is also available as the integrated `tokenizer.chat_template`, which can be used instead of manually specifying the template: ```python messages = [ {"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi"}, {"role": "user", "content": "How are you today?"} ] tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True) ``` ## Inference using Transformers ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "openchat/openchat-3.6-8b-20240522" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto") messages = [ {"role": "user", "content": "Explain how large language models work in detail."}, ] input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device) outputs = model.generate(input_ids, do_sample=True, temperature=0.5, max_new_tokens=1024 ) response = outputs[0][input_ids.shape[-1]:] print(tokenizer.decode(response, skip_special_tokens=True)) ``` <div align="center"> <h2> Limitations </h2> </div> **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model. **Safety** OpenChat may sometimes generate harmful, hate speech, biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses. <div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Project Lead:** - Guan Wang [imonenext at gmail dot com] - [Alpay Ariyak](https://github.com/alpayariyak) [aariyak at wpi dot edu] <div align="center"> <h2> Citation </h2> </div> ``` @article{wang2023openchat, title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data}, author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang}, journal={arXiv preprint arXiv:2309.11235}, year={2023} } ```
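For running the GGUF quants from this repository directly (rather than serving the original weights as described above), here is a minimal, illustrative `llama-cpp-python` sketch; the local file name is an assumption, and the prompt simply follows the OpenChat 3.6 template shown above with `<|eot_id|>` as the stop sequence.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumes one of the quantized files from this repo (e.g. a Q4_K_M quant) is downloaded locally.
llm = Llama(model_path="openchat-3.6-8b-20240522.Q4_K_M.gguf", n_ctx=8192)

prompt = (
    "<|start_header_id|>GPT4 Correct User<|end_header_id|>\n\n"
    "Write one sentence about quantization.<|eot_id|>"
    "<|start_header_id|>GPT4 Correct Assistant<|end_header_id|>\n\n"
)

out = llm(prompt, max_tokens=128, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```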
RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf
RichardErkhov
2024-05-31T05:35:50Z
410
0
null
[ "gguf", "region:us" ]
null
2024-05-31T01:19:56Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Gugugo-koen-7B-V1.1 - GGUF - Model creator: https://huggingface.co/squarelike/ - Original model: https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1/ | Name | Quant method | Size | | ---- | ---- | ---- | | [Gugugo-koen-7B-V1.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q2_K.gguf) | Q2_K | 2.42GB | | [Gugugo-koen-7B-V1.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.IQ3_XS.gguf) | IQ3_XS | 2.67GB | | [Gugugo-koen-7B-V1.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.IQ3_S.gguf) | IQ3_S | 2.81GB | | [Gugugo-koen-7B-V1.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q3_K_S.gguf) | Q3_K_S | 2.81GB | | [Gugugo-koen-7B-V1.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.IQ3_M.gguf) | IQ3_M | 2.97GB | | [Gugugo-koen-7B-V1.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q3_K.gguf) | Q3_K | 3.14GB | | [Gugugo-koen-7B-V1.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q3_K_M.gguf) | Q3_K_M | 3.14GB | | [Gugugo-koen-7B-V1.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q3_K_L.gguf) | Q3_K_L | 3.42GB | | [Gugugo-koen-7B-V1.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.IQ4_XS.gguf) | IQ4_XS | 3.47GB | | [Gugugo-koen-7B-V1.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q4_0.gguf) | Q4_0 | 3.64GB | | [Gugugo-koen-7B-V1.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.IQ4_NL.gguf) | IQ4_NL | 3.66GB | | [Gugugo-koen-7B-V1.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q4_K_S.gguf) | Q4_K_S | 3.67GB | | [Gugugo-koen-7B-V1.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q4_K.gguf) | Q4_K | 3.88GB | | [Gugugo-koen-7B-V1.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q4_K_M.gguf) | Q4_K_M | 3.88GB | | [Gugugo-koen-7B-V1.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q4_1.gguf) | Q4_1 | 4.03GB | | [Gugugo-koen-7B-V1.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q5_0.gguf) | Q5_0 | 4.42GB | | [Gugugo-koen-7B-V1.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q5_K_S.gguf) | Q5_K_S | 4.42GB | | [Gugugo-koen-7B-V1.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q5_K.gguf) | Q5_K | 2.16GB | | 
[Gugugo-koen-7B-V1.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q5_K_M.gguf) | Q5_K_M | 0.65GB | | [Gugugo-koen-7B-V1.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q5_1.gguf) | Q5_1 | 0.62GB | | [Gugugo-koen-7B-V1.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q6_K.gguf) | Q6_K | 0.56GB | | [Gugugo-koen-7B-V1.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/squarelike_-_Gugugo-koen-7B-V1.1-gguf/blob/main/Gugugo-koen-7B-V1.1.Q8_0.gguf) | Q8_0 | 0.63GB | Original model description: --- license: apache-2.0 datasets: - squarelike/sharegpt_deepl_ko_translation language: - en - ko pipeline_tag: translation --- # Gugugo-koen-7B-V1.1 Detail repo: [https://github.com/jwj7140/Gugugo](https://github.com/jwj7140/Gugugo) ![Gugugo](./logo.png) **Base Model**: [Llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b) **Training Dataset**: [sharegpt_deepl_ko_translation](https://huggingface.co/datasets/squarelike/sharegpt_deepl_ko_translation). I trained it with 1x A6000 GPU for 90 hours. ## **Prompt Template** **KO->EN** ``` ### 한국어: {sentence}</끝> ### 영어: ``` **EN->KO** ``` ### 영어: {sentence}</끝> ### 한국어: ``` GPTQ, AWQ, and GGUF versions are also available. [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GPTQ) [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-AWQ) [https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF](https://huggingface.co/squarelike/Gugugo-koen-7B-V1.1-GGUF) ## **Implementation Code** ```python from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList import torch repo = "squarelike/Gugugo-koen-7B-V1.1" model = AutoModelForCausalLM.from_pretrained( repo, load_in_4bit=True, device_map='auto' ) tokenizer = AutoTokenizer.from_pretrained(repo) class StoppingCriteriaSub(StoppingCriteria): def __init__(self, stops = [], encounters=1): super().__init__() self.stops = [stop for stop in stops] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor): for stop in self.stops: if torch.all((stop == input_ids[0][-len(stop):])).item(): return True return False stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda") stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)]) def gen(lan="en", x=""): if (lan == "ko"): prompt = f"### 한국어: {x}</끝>\n### 영어:" else: prompt = f"### 영어: {x}</끝>\n### 한국어:" gened = model.generate( **tokenizer( prompt, return_tensors='pt', return_token_type_ids=False ).to("cuda"), max_new_tokens=2000, temperature=0.3, # no_repeat_ngram_size=5, num_beams=5, stopping_criteria=stopping_criteria ) return tokenizer.decode(gened[0][1:]).replace(prompt+" ", "").replace("</끝>", "") print(gen(lan="en", x="Hello, world!")) ```
MarcusLoren/MeshGPT-preview
MarcusLoren
2024-06-08T01:23:37Z
410
16
transformers
[ "transformers", "text-to-3d", "arxiv:2311.15475", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-to-3d
2024-06-06T01:31:14Z
--- license: apache-2.0 pipeline_tag: text-to-3d --- <p style="text-align: center;"> <a href="https://huggingface.co/spaces/MarcusLoren/MeshGPT" style="font-size: 24px; font-weight: 600;">DEMO</a> </p> ### MeshGPT-alpha-preview MeshGPT is a text-to-3D model based on an autoencoder (tokenizer) and a transformer that generates the tokens. The autoencoder's purpose is to translate 3D meshes into tokens, which the decoder part can then convert back into a 3D mesh.<br/> For all intents and purposes, the autoencoder is the **world's first** published **3D model tokenizer**! (correct me if I'm wrong!) ## Model Details The autoencoder (tokenizer) is a relatively small model with 50M parameters, while the transformer uses 184M parameters and its core is based on GPT2-small. Due to hardware constraints it was trained with a codebook/vocabulary size of 2048.<br/> Developed & trained by: Me, with credits for the MeshGPT codebase to [Phil Wang](https://github.com/lucidrains) ## Performance: CPU: 10 triangles/s<br/> 3060 GPU: 40 triangles/s<br/> 4090 GPU: 110 triangles/s<br/> ### Warning: This model has been created without any sponsors or rented GPU hardware, so it has very limited capability in terms of what it can generate. It can handle single objects such as 'chair' or 'table' fine, but more complex objects require more training (see the training dataset section). There is also a problem with face orientation, since the triangle order was optimized for the model before training. However, this will be fixed in later versions. ### Usage: Install: ``` pip install git+https://github.com/MarcusLoppe/meshgpt-pytorch.git ``` ``` import torch from meshgpt_pytorch import ( MeshAutoencoder, MeshTransformer, mesh_render ) device = "cuda" if torch.cuda.is_available() else "cpu" transformer = MeshTransformer.from_pretrained("MarcusLoren/MeshGPT-preview").to(device) output = [] output.append((transformer.generate(texts = ['sofa','bed', 'computer screen', 'bench', 'chair', 'table' ] , temperature = 0.0) )) output.append((transformer.generate(texts = ['milk carton', 'door', 'shovel', 'heart', 'trash can', 'ladder'], temperature = 0.0) )) output.append((transformer.generate(texts = ['hammer', 'pedestal', 'pickaxe', 'wooden cross', 'coffee bean', 'crowbar'], temperature = 0.0) )) output.append((transformer.generate(texts = ['key', 'minecraft character', 'dragon head', 'open book', 'minecraft turtle', 'wooden table'], temperature = 0.0) )) output.append((transformer.generate(texts = ['gun', 'ice cream cone', 'axe', 'helicopter', 'shotgun', 'plastic bottle'], temperature = 0.0) )) mesh_render.save_rendering(f'./render.obj', output) ``` ## Expected output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e233acec775bfe0d5cbc6/K04Qj_xgwmNT_MldTA1l8.png) Random samples generated from text only: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e233acec775bfe0d5cbc6/UH1r5s9Lfj4sUSgClqhrf.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/657e233acec775bfe0d5cbc6/oxZnaUldcmvGfJprWLa-w.png) ## Training dataset I've only had access to the free-tier GPU on Kaggle, so this model is trained on only 4k models with at most 250 triangles. The dataset contains a total of 800 text labels, so what it can generate is limited.
The 3D models were sourced from [objaverse](https://huggingface.co/datasets/allenai/objaverse), [shapenet](https://huggingface.co/datasets/ShapeNet/shapenetcore-gltf) and [ModelNet40](https://www.kaggle.com/datasets/balraj98/modelnet40-princeton-3d-object-dataset/data). ## How it works: MeshGPT uses an autoencoder which takes a 3D mesh (quads are supported but not used in this model) and quantizes it into a codebook that can be used as tokens. The second part of MeshGPT is the transformer, which trains on the tokens generated by the autoencoder while cross-attending to a text embedding. The final product is a tokenizer and a transformer that can take a text embedding as input and then autoregressively generate a 3D model based on the text. The tokens generated by the transformer can then be converted back into a 3D mesh using the autoencoder. ## Credits The idea for MeshGPT came from the paper ( https://arxiv.org/abs/2311.15475 ), but the creators didn't release any code or model. Phil Wang (https://github.com/lucidrains) drew inspiration from the paper, made a number of improvements over the paper's implementation, and created the repo: https://github.com/lucidrains/meshgpt-pytorch My goal has been to figure out how to train MeshGPT and turn it into reality. <br/> Many thanks to [K. S. Ernest](https://github.com/fire), who helped me with the Gradio demo as well as with training the upcoming model on a larger dataset. See my GitHub repo for a notebook on how to get started training your own MeshGPT! [MarcusLoppe/meshgpt-pytorch](https://github.com/MarcusLoppe/meshgpt-pytorch/)
hop1um/blip-vqa-rad
hop1um
2024-06-20T09:06:53Z
410
0
transformers
[ "transformers", "safetensors", "blip", "visual-question-answering", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
visual-question-answering
2024-06-15T06:43:49Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
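Since the "How to Get Started" section above is still a placeholder, here is a minimal, illustrative sketch of the standard 🤗 Transformers BLIP visual-question-answering pattern applied to this checkpoint. It assumes the repository contains the matching processor files; the image path and question are placeholders, and reading the "rad" in the name as a VQA-RAD (radiology) fine-tune is an inference, not something stated by the card.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

model_id = "hop1um/blip-vqa-rad"

processor = BlipProcessor.from_pretrained(model_id)
model = BlipForQuestionAnswering.from_pretrained(model_id)

# Placeholder inputs: a local image and a free-form question about it.
image = Image.open("example_image.png").convert("RGB")
question = "What abnormality is visible in this image?"

inputs = processor(image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```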
RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf
RichardErkhov
2024-06-16T07:49:53Z
410
0
null
[ "gguf", "region:us" ]
null
2024-06-15T18:15:56Z
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) MixtureofMerges-MoE-4x7b-v4 - GGUF - Model creator: https://huggingface.co/jsfs11/ - Original model: https://huggingface.co/jsfs11/MixtureofMerges-MoE-4x7b-v4/ | Name | Quant method | Size | | ---- | ---- | ---- | | [MixtureofMerges-MoE-4x7b-v4.Q2_K.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q2_K.gguf) | Q2_K | 8.24GB | | [MixtureofMerges-MoE-4x7b-v4.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.IQ3_XS.gguf) | IQ3_XS | 9.21GB | | [MixtureofMerges-MoE-4x7b-v4.IQ3_S.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.IQ3_S.gguf) | IQ3_S | 9.73GB | | [MixtureofMerges-MoE-4x7b-v4.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q3_K_S.gguf) | Q3_K_S | 9.72GB | | [MixtureofMerges-MoE-4x7b-v4.IQ3_M.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.IQ3_M.gguf) | IQ3_M | 9.92GB | | [MixtureofMerges-MoE-4x7b-v4.Q3_K.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q3_K.gguf) | Q3_K | 10.79GB | | [MixtureofMerges-MoE-4x7b-v4.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q3_K_M.gguf) | Q3_K_M | 10.79GB | | [MixtureofMerges-MoE-4x7b-v4.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q3_K_L.gguf) | Q3_K_L | 11.68GB | | [MixtureofMerges-MoE-4x7b-v4.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.IQ4_XS.gguf) | IQ4_XS | 12.15GB | | [MixtureofMerges-MoE-4x7b-v4.Q4_0.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q4_0.gguf) | Q4_0 | 12.69GB | | [MixtureofMerges-MoE-4x7b-v4.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.IQ4_NL.gguf) | IQ4_NL | 12.81GB | | [MixtureofMerges-MoE-4x7b-v4.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q4_K_S.gguf) | Q4_K_S | 12.8GB | | [MixtureofMerges-MoE-4x7b-v4.Q4_K.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q4_K.gguf) | Q4_K | 13.61GB | | [MixtureofMerges-MoE-4x7b-v4.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q4_K_M.gguf) | Q4_K_M | 13.61GB | | [MixtureofMerges-MoE-4x7b-v4.Q4_1.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q4_1.gguf) | Q4_1 | 14.09GB | | [MixtureofMerges-MoE-4x7b-v4.Q5_0.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q5_0.gguf) | Q5_0 | 15.48GB | | 
[MixtureofMerges-MoE-4x7b-v4.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q5_K_S.gguf) | Q5_K_S | 15.48GB | | [MixtureofMerges-MoE-4x7b-v4.Q5_K.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q5_K.gguf) | Q5_K | 15.96GB | | [MixtureofMerges-MoE-4x7b-v4.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q5_K_M.gguf) | Q5_K_M | 15.96GB | | [MixtureofMerges-MoE-4x7b-v4.Q5_1.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q5_1.gguf) | Q5_1 | 16.88GB | | [MixtureofMerges-MoE-4x7b-v4.Q6_K.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q6_K.gguf) | Q6_K | 18.46GB | | [MixtureofMerges-MoE-4x7b-v4.Q8_0.gguf](https://huggingface.co/RichardErkhov/jsfs11_-_MixtureofMerges-MoE-4x7b-v4-gguf/blob/main/MixtureofMerges-MoE-4x7b-v4.Q8_0.gguf) | Q8_0 | 23.9GB | Original model description: --- license: apache-2.0 tags: - moe - frankenmoe - merge - mergekit - lazymergekit - flemmingmiguel/MBX-7B-v3 - Kukedlc/NeuTrixOmniBe-7B-model-remix - PetroGPT/WestSeverus-7B-DPO - vanillaOVO/supermario_v4 base_model: - flemmingmiguel/MBX-7B-v3 - Kukedlc/NeuTrixOmniBe-7B-model-remix - PetroGPT/WestSeverus-7B-DPO - vanillaOVO/supermario_v4 model-index: - name: MixtureofMerges-MoE-4x7b-v4 results: - task: type: text-generation name: Text Generation dataset: name: AI2 Reasoning Challenge (25-Shot) type: ai2_arc config: ARC-Challenge split: test args: num_few_shot: 25 metrics: - type: acc_norm value: 72.53 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: HellaSwag (10-Shot) type: hellaswag split: validation args: num_few_shot: 10 metrics: - type: acc_norm value: 88.85 name: normalized accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: MMLU (5-Shot) type: cais/mmlu config: all split: test args: num_few_shot: 5 metrics: - type: acc value: 64.53 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: TruthfulQA (0-shot) type: truthful_qa config: multiple_choice split: validation args: num_few_shot: 0 metrics: - type: mc2 value: 75.3 source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: Winogrande (5-shot) type: winogrande config: winogrande_xl split: validation args: num_few_shot: 5 metrics: - type: acc value: 84.85 name: accuracy source: url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4 name: Open LLM Leaderboard - task: type: text-generation name: Text Generation dataset: name: GSM8k (5-shot) type: gsm8k config: main split: test args: num_few_shot: 5 metrics: - type: acc value: 71.34 name: accuracy source: 
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=jsfs11/MixtureofMerges-MoE-4x7b-v4 name: Open LLM Leaderboard --- # MixtureofMerges-MoE-4x7b-v4 MixtureofMerges-MoE-4x7b-v4 is a Mixure of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) * [Kukedlc/NeuTrixOmniBe-7B-model-remix](https://huggingface.co/Kukedlc/NeuTrixOmniBe-7B-model-remix) * [PetroGPT/WestSeverus-7B-DPO](https://huggingface.co/PetroGPT/WestSeverus-7B-DPO) * [vanillaOVO/supermario_v4](https://huggingface.co/vanillaOVO/supermario_v4) ## 🧩 Configuration ```yaml base_model: Kukedlc/NeuTrixOmniBe-7B-model-remix gate_mode: hidden dtype: bfloat16 experts: - source_model: flemmingmiguel/MBX-7B-v3 positive_prompts: - "Answer this question from the ARC (Argument Reasoning Comprehension)." - "Use common sense and logical reasoning skills." - "What assumptions does this argument rely on?" - "Are these assumptions valid? Explain." - "Could this be explained in a different way? Provide an alternative explanation." - "Identify any weaknesses in this argument." - "Does this argument contain any logical fallacies? If so, which ones?" negative_prompts: - "misses key evidence" - "overly general" - "focuses on irrelevant details" - "assumes information not provided" - "relies on stereotypes" - source_model: Kukedlc/NeuTrixOmniBe-7B-model-remix positive_prompts: - "Answer this question, demonstrating commonsense understanding and using any relevant general knowledge you may have." - "Provide a concise summary of this passage, then explain why the highlighted section is essential to the main idea." - "Read these two brief articles presenting different viewpoints on the same topic. List their key arguments and highlight where they disagree." - "Paraphrase this statement, changing the emotional tone but keeping the core meaning intact. Example: Rephrase a worried statement in a humorous way" - "Create a short analogy that helps illustrate the main concept of this article." negative_prompts: - "sounds too basic" - "understated" - "dismisses important details" - "avoids the question's nuance" - "takes this statement too literally" - source_model: PetroGPT/WestSeverus-7B-DPO positive_prompts: - "Calculate the answer to this math problem" - "My mathematical capabilities are strong, allowing me to handle complex mathematical queries" - "solve for" - "A store sells apples at $0.50 each. If Emily buys 12 apples, how much does she need to pay?" - "Isolate x in the following equation: 2x + 5 = 17" - "Solve this equation and show your working." - "Explain why you used this formula to solve the problem." - "Attempt to divide this number by zero. Explain why this cannot be done." negative_prompts: - "incorrect" - "inaccurate" - "creativity" - "assumed without proof" - "rushed calculation" - "confuses mathematical concepts" - "draws illogical conclusions" - "circular reasoning" - source_model: vanillaOVO/supermario_v4 positive_prompts: - "Generate a few possible continuations to this scenario." - "Demonstrate understanding of everyday commonsense in your response." - "Use contextual clues to determine the most likely outcome." - "Continue this scenario, but make the writing style sound archaic and overly formal." - "This narrative is predictable. Can you introduce an unexpected yet plausible twist?" - "The character is angry. 
Continue this scenario showcasing a furious outburst." negative_prompts: - "repetitive phrases" - "overuse of the same words" - "contradicts earlier statements - breaks the internal logic of the scenario" - "out of character dialogue" - "awkward phrasing - sounds unnatural" - "doesn't match the given genre" ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "jsfs11/MixtureofMerges-MoE-4x7b-v4" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ``` # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_jsfs11__MixtureofMerges-MoE-4x7b-v4) | Metric |Value| |---------------------------------|----:| |Avg. |76.23| |AI2 Reasoning Challenge (25-Shot)|72.53| |HellaSwag (10-Shot) |88.85| |MMLU (5-Shot) |64.53| |TruthfulQA (0-shot) |75.30| |Winogrande (5-shot) |84.85| |GSM8k (5-shot) |71.34|
weiiv/term_Qwen2_7_json_new
weiiv
2024-06-20T13:48:41Z
410
0
transformers
[ "transformers", "safetensors", "gguf", "qwen2", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-06-20T13:42:10Z
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
khalidalt/DeBERTa-v3-large-mnli
khalidalt
2021-11-22T08:38:23Z
409
5
transformers
[ "transformers", "pytorch", "deberta-v2", "text-classification", "zero-shot-classification", "en", "arxiv:2006.03654", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en tags: - text-classification - zero-shot-classification metrics: - accuracy widget: - text: "The Movie have been criticized for the story. However, I think it is a great movie. [SEP] I liked the movie." --- # DeBERTa-v3-large-mnli ## Model description This model was trained on the Multi-Genre Natural Language Inference ( MultiNLI ) dataset, which consists of 433k sentence pairs annotated with textual entailment information. The model used is [DeBERTa-v3-large from Microsoft](https://huggingface.co/microsoft/deberta-large). DeBERTa v3 outperforms BERT and RoBERTa on the majority of NLU benchmarks by using disentangled attention and an enhanced mask decoder. More information about the original model is available in the [official repository](https://github.com/microsoft/DeBERTa) and the [paper](https://arxiv.org/abs/2006.03654). ## Intended uses & limitations #### How to use the model ```python import torch from transformers import AutoTokenizer, AutoModelForSequenceClassification device = "cuda:0" if torch.cuda.is_available() else "cpu" model_name = "khalidalt/DeBERTa-v3-large-mnli" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device) premise = "The Movie have been criticized for the story. However, I think it is a great movie." hypothesis = "I liked the movie." input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt") output = model(input["input_ids"].to(device)) prediction = torch.softmax(output["logits"][0], -1) label_names = ["entailment", "neutral", "contradiction"] print(label_names[prediction.argmax(0).tolist()]) ``` ### Training data This model was trained on the MultiNLI dataset, which consists of 392K sentence pairs with textual entailment labels. ### Training procedure DeBERTa-v3-large-mnli was trained using the Hugging Face trainer with the following hyperparameters. ``` train_args = TrainingArguments( learning_rate=2e-5, per_device_train_batch_size=8, per_device_eval_batch_size=8, num_train_epochs=3, warmup_ratio=0.06, weight_decay=0.1, fp16=True, seed=42, ) ``` ### BibTeX entry and citation info Please cite the [DeBERTa paper](https://arxiv.org/abs/2006.03654) and the [MultiNLI dataset](https://cims.nyu.edu/~sbowman/multinli/paper.pdf) if you use this model, and link back to this Hugging Face Hub page.
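Because the card is also tagged for zero-shot classification, here is a short, illustrative sketch of that use case with the 🤗 Transformers pipeline; it assumes the checkpoint's NLI label mapping is picked up correctly by the pipeline, and the example text and candidate labels are placeholders.

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="khalidalt/DeBERTa-v3-large-mnli")

text = "The staff were friendly, but the room was far too noisy to sleep."
candidate_labels = ["service", "noise", "price", "location"]

result = classifier(text, candidate_labels=candidate_labels)
print(result["labels"][0], result["scores"][0])
```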