Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Meta-Llama-3.1-8B-Instruct-Summarizer - GGUF
- Model creator: https://huggingface.co/raaec/
- Original model: https://huggingface.co/raaec/Meta-Llama-3.1-8B-Instruct-Summarizer/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q2_K.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q2_K.gguf) | Q2_K | 2.96GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.IQ3_S.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.IQ3_M.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K.gguf) | Q3_K | 3.74GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_0.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_0.gguf) | Q4_0 | 4.34GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_K.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_K.gguf) | Q4_K | 4.58GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_1.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_1.gguf) | Q4_1 | 4.78GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_0.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_0.gguf) | Q5_0 | 5.21GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_K.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_K.gguf) | Q5_K | 5.34GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_1.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q5_1.gguf) | Q5_1 | 5.65GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q6_K.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q6_K.gguf) | Q6_K | 6.14GB |
| [Meta-Llama-3.1-8B-Instruct-Summarizer.Q8_0.gguf](https://huggingface.co/RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf/blob/main/Meta-Llama-3.1-8B-Instruct-Summarizer.Q8_0.gguf) | Q8_0 | 7.95GB |

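The files above live in a standard Hugging Face model repo, so each one can also be fetched directly through the Hub's `resolve` endpoint (or with `huggingface_hub.hf_hub_download`). A minimal sketch of building the direct-download URL for one of the quants, assuming the `main` revision:

```python
# Repo id and filename taken from the table above.
REPO_ID = "RichardErkhov/raaec_-_Meta-Llama-3.1-8B-Instruct-Summarizer-gguf"
FILENAME = "Meta-Llama-3.1-8B-Instruct-Summarizer.Q4_K_M.gguf"


def resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face model repo.

    Uses the Hub's standard /resolve/<revision>/<path> endpoint, which
    redirects to the actual file storage.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


print(resolve_url(REPO_ID, FILENAME))
```

The same repo id and filename can be passed to `hf_hub_download(repo_id=REPO_ID, filename=FILENAME)` if you prefer a managed, cached download over a raw URL.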
Original model description:
---
license: apache-2.0
pipeline_tag: summarization
widget:
- text: >-
    Hugging Face: Revolutionizing Natural Language Processing Introduction In
    the rapidly evolving field of Natural Language Processing (NLP), Hugging
    Face has emerged as a prominent and innovative force. This article will
    explore the story and significance of Hugging Face, a company that has
    made remarkable contributions to NLP and AI as a whole. From its inception
    to its role in democratizing AI, Hugging Face has left an indelible mark
    on the industry. The Birth of Hugging Face Hugging Face was founded in
    2016 by Clément Delangue, Julien Chaumond, and Thomas Wolf. The name
    Hugging Face was chosen to reflect the company's mission of making AI
    models more accessible and friendly to humans, much like a comforting hug.
    Initially, they began as a chatbot company but later shifted their focus
    to NLP, driven by their belief in the transformative potential of this
    technology. Transformative Innovations Hugging Face is best known for its
    open-source contributions, particularly the Transformers library. This
    library has become the de facto standard for NLP and enables researchers,
    developers, and organizations to easily access and utilize
    state-of-the-art pre-trained language models, such as BERT, GPT-3, and
    more. These models have countless applications, from chatbots and virtual
    assistants to language translation and sentiment analysis.
  example_title: Summarization Example 1
---

## Model Information

This is a fine-tuned version of Llama 3.1, trained on English, Spanish, and Chinese text for summarization.

The Meta Llama 3.1 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 8B, 70B, and 405B sizes (text in/text out). The Llama 3.1 instruction-tuned, text-only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open-source and closed chat models on common industry benchmarks.

**Model developer:** Meta

**Model Architecture:** Llama 3.1 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
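Since these are instruction-tuned GGUFs, a runtime that does not apply the chat template for you needs the prompt assembled in the Llama 3.1 chat format. A minimal sketch, assuming the standard Llama 3.1 special tokens (`<|begin_of_text|>`, `<|start_header_id|>`, `<|eot_id|>`); most llama.cpp-based runtimes apply this template automatically, so hand-building is only needed for raw completion endpoints:

```python
def build_prompt(text: str,
                 system: str = "Summarize the user's text concisely.") -> str:
    """Assemble a single-turn summarization prompt in Llama 3.1 chat format.

    The trailing assistant header leaves the model positioned to generate
    the summary.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + text + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )


prompt = build_prompt("Hugging Face was founded in 2016 by Clément Delangue, "
                      "Julien Chaumond, and Thomas Wolf...")
print(prompt)
```

The resulting string can be passed as the prompt to `llama-cli` or `llama_cpp.Llama(...)` against any of the GGUF files listed above; stop generation on `<|eot_id|>`.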