Update README.md
README.md (changed)
FalconLite2 is a fine-tuned and quantized [Falcon 40B](https://huggingface.co/tiiuae/falcon-40b) language model, capable of processing long (up to 24K tokens) input sequences. By utilizing 4-bit [GPTQ quantization](https://github.com/PanQiWei/AutoGPTQ) and an adapted RotaryEmbedding, FalconLite2 can process 10x longer contexts while consuming 4x less GPU memory than the original model. FalconLite2 is useful for applications such as topic retrieval, summarization, and question answering. It can be deployed on a single AWS `g5.12x` instance with [TGI 1.0.3](https://github.com/huggingface/text-generation-inference/tree/v1.0.3), making it suitable for applications that require high performance in resource-constrained environments.
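As a rough illustration of the deployment path above, the sketch below sends a request to a running TGI 1.0.3 endpoint over its REST API. The host, port, prompt, and generation parameters are illustrative placeholders, not values taken from this card.

```python
import requests

# Hypothetical address of a TGI 1.0.3 container serving FalconLite2;
# adjust host/port to match your own deployment.
ENDPOINT = "http://127.0.0.1:8080/generate"

payload = {
    "inputs": "What are the main challenges of supporting long input contexts in LLMs?",
    "parameters": {
        "max_new_tokens": 256,  # illustrative generation settings
        "do_sample": False,
    },
}

response = requests.post(ENDPOINT, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["generated_text"])
```
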
FalconLite2 evolves from [FalconLite](https://huggingface.co/amazon/FalconLite), and their similarities and differences are summarized below:
| Model | Fine-tuned on long contexts | Quantization | Max context length | RotaryEmbedding adaptation | Inference framework |
|----------|-------------:|-------------:|------------:|-----------:|-----------:|
| FalconLite | No | 4-bit GPTQ | 12K | [dNTK](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/) | TGI 0.9.2 |
| FalconLite2 | Yes | 4-bit GPTQ | 24K | rope_theta = 1000000 | TGI 1.0.3 |
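The `rope_theta` value in the table is the base used to compute rotary-embedding frequencies. As a minimal sketch of why raising it helps with longer contexts, the snippet below evaluates the standard RoPE inverse-frequency formula for the default base (10000) and the larger base listed above; this is the generic RoPE formulation with a placeholder head dimension, not code from this repository.

```python
import numpy as np

def rope_inv_freq(theta: float, head_dim: int = 64) -> np.ndarray:
    """Standard RoPE inverse frequencies: 1 / theta**(2i / head_dim)."""
    return 1.0 / (theta ** (np.arange(0, head_dim, 2) / head_dim))

# A larger theta lowers the frequencies, so the rotations wrap more slowly
# and positions well beyond the original training length remain distinguishable.
default_freqs = rope_inv_freq(theta=10_000.0)
extended_freqs = rope_inv_freq(theta=1_000_000.0)
print(default_freqs[-1], extended_freqs[-1])  # slowest frequency shrinks with larger theta
```
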
## Model Details
- **Language:** English
- **Finetuned from weights:** [Falcon 40B SFT OASST-TOP1 model](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560) (see the prompt-format sketch after this list)
- **Finetuned on data:** [SLiding-Encoder and Decoder (SLED)](https://huggingface.co/datasets/tau/sled) and [(Long) Natural Questions (NQ)](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections#multi-passage-qa-from-natural-questions)
- **Served using framework:** [Text-Generation-Inference 1.0.3](https://github.com/huggingface/text-generation-inference/tree/v1.0.3)
- **Model License:** Apache 2.0
- **Contact:** [GitHub issues](https://github.com/awslabs/extending-the-context-length-of-open-source-llms/issues)
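Because FalconLite2 is fine-tuned from the OASST-TOP1 SFT model listed above, prompts are assumed here to follow that base model's OpenAssistant-style chat template; the exact special tokens below are an assumption to verify against the official FalconLite2 documentation, not a confirmed specification from this card.

```python
def build_prompt(user_message: str) -> str:
    # Assumed OpenAssistant-style template inherited from the base SFT model;
    # verify these special tokens against the official FalconLite2 docs.
    return f"<|prompter|>{user_message}<|endoftext|><|assistant|>"

print(build_prompt("What are the main challenges to support a long context for LLM?"))
```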