# SliceX AI™ ELM Turbo
**ELM** (which stands for **E**fficient **L**anguage **M**odels) **Turbo** is the next-generation model in the series of cutting-edge language models from [SliceX AI](https://slicex.ai), designed to deliver best-in-class performance in terms of _quality_, _throughput_ & _memory_.

<div align="center">
  <img src="elm-turbo-starfruit.png" width="256"/>
</div>

ELM is designed to be a modular and customizable family of neural networks that are highly efficient and performant. Today we are sharing the second version in this series: **ELM-Turbo** models (named _Starfruit_).

_Model:_ ELM Turbo introduces a more _adaptable_, _decomposable_ LLM architecture, yielding the flexibility to (de)compose an LLM into smaller stand-alone slices. Compared to our previous version, the new architecture allows more powerful model slices to be learnt during training (yielding better quality and higher generative capacity) and gives finer-grained control over LLM efficiency: slices can be cut at different granularities to produce varying model sizes, depending on user/task needs and deployment criteria, i.e., cloud or edge-device constraints.

_Training:_ ELM Turbo introduces algorithmic optimizations that allow us to train a single model which, once trained, can be sliced in many ways to fit different user/task needs.
ELM introduces a new type of _(de)composable LLM architecture_ along with the algorithmic optimizations required to learn (training) and run (inference) these models. At a high level, we train a single ELM model in a self-supervised manner (during the pre-training phase), but once trained the ELM model can be sliced in many ways to fit different user/task needs. The optimizations can be applied during the pre-training and/or fine-tuning stage. A purely illustrative sketch of the slicing idea follows below.
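To make the slicing idea concrete, here is a minimal, purely illustrative PyTorch sketch under our own simplifying assumptions: it shrinks a toy feed-forward block by keeping a fraction of its intermediate units and copying the corresponding weights. This is only a conceptual analogy, not the actual ELM Turbo (de)composition algorithm, which is not described in this README.

```python
# Purely illustrative: width-slicing a toy feed-forward block by keeping a
# fraction of its intermediate units. The real ELM Turbo (de)composition
# algorithm is not described here and may differ substantially.
import torch
import torch.nn as nn

def slice_ffn(up_proj: nn.Linear, down_proj: nn.Linear, keep_fraction: float):
    """Return smaller copies of (up_proj, down_proj) that keep only the first
    `keep_fraction` of the intermediate (hidden) units."""
    keep = int(up_proj.out_features * keep_fraction)
    sliced_up = nn.Linear(up_proj.in_features, keep, bias=up_proj.bias is not None)
    sliced_down = nn.Linear(keep, down_proj.out_features, bias=down_proj.bias is not None)
    with torch.no_grad():
        sliced_up.weight.copy_(up_proj.weight[:keep, :])      # keep rows of the up projection
        sliced_down.weight.copy_(down_proj.weight[:, :keep])  # keep matching columns of the down projection
        if up_proj.bias is not None:
            sliced_up.bias.copy_(up_proj.bias[:keep])
        if down_proj.bias is not None:
            sliced_down.bias.copy_(down_proj.bias)
    return sliced_up, sliced_down

# Example: shrink a toy 3072 -> 8192 -> 3072 FFN to half its intermediate width.
up, down = nn.Linear(3072, 8192), nn.Linear(8192, 3072)
half_up, half_down = slice_ffn(up, down, keep_fraction=0.5)
```

The released checkpoints below are already pre-sliced models of different sizes, so no slicing is required on the user side.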

_Fast Inference with Customization:_ As with our previous version, once trained, the ELM Turbo model architecture permits flexible inference strategies at runtime depending on deployment & device constraints, allowing users to make optimal compute/memory tradeoff choices for their application needs. In addition to the blazing-fast speeds achieved by native ELM Turbo slice optimization, we also layered in NVIDIA's TensorRT-LLM integration to get further speedups. The end result 👉 optimized ELM Turbo models that achieve some of the world's best LLM performance.

- **Blog:** [Medium](https://medium.com/sujith-ravi/introducing-elm-efficient-customizable-privacy-preserving-llms-cea56e4f727d)

- **GitHub:** https://github.com/slicex-ai/elm-turbo

- **Hugging Face** (access ELM Turbo models in HF): 👉 [here](https://huggingface.co/collections/slicexai/elm-turbo-66945032f3626024aa066fde)

## ELM Turbo Model Release
In this version, we applied our new, improved decomposable ELM techniques to a widely used open-source LLM, `microsoft/Phi-3-mini-128k-instruct` (3.82B params; see the [Phi-3 license](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE) for usage terms). After training, we generated three smaller slices with parameter counts ranging from 1.33 billion to 2.01 billion. Furthermore, we seamlessly integrated these slices into NVIDIA's [TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM), providing trtllm engines compatible with A100 and H100 GPUs.

- [Section 1.](https://github.com/slicex-ai/elm-turbo/blob/main/README.md#1-run-elm-turbo-models-with-huggingface-transformers-library) 👉 instructions to run ELM-Turbo with the Hugging Face Transformers library :hugs:.
- [Section 2.](https://github.com/slicex-ai/elm-turbo/blob/main/README.md#2-running-elm-turbo-via-nvidias-tensorrt-llm) 👉 instructions to run ELM-Turbo engines powered by NVIDIA's TensorRT-LLM.

**NOTE**: The open-source datasets from the Hugging Face Hub used for instruction fine-tuning ELM Turbo include, but are not limited to: `allenai/tulu-v2-sft-mixture`, `microsoft/orca-math-word-problems-200k`, `mlabonne/WizardLM_evol_instruct_70k-ShareGPT`, and `mlabonne/WizardLM_evol_instruct_v2_196K-ShareGPT`. We advise users to exercise caution when utilizing ELM Turbo, as these datasets may contain factually incorrect information, unintended biases, inappropriate content, and other potential issues. It is recommended to thoroughly evaluate the model's outputs and implement appropriate safeguards for your specific use case.

## 1. Run ELM Turbo models with the Hugging Face Transformers library
There are three ELM Turbo slices derived from the `phi3-mini` (3.82B params) model: 1. `slicexai/elm-turbo-0.125-instruct` (1.33B params), 2. `slicexai/elm-turbo-0.25-instruct` (1.56B params), 3. `slicexai/elm-turbo-0.50-instruct` (2.01B params); the model IDs are collected in the snippet below.
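
For convenience, here are the three released slice IDs as a small Python mapping (the sizes are the approximate parameter counts quoted above; the variable name is just a convenience constant). Any of these IDs can be dropped into the example that follows.

```python
# Released ELM Turbo slices: Hugging Face model ID -> approximate size.
ELM_TURBO_SLICES = {
    "slicexai/elm-turbo-0.125-instruct": "1.33B params",
    "slicexai/elm-turbo-0.25-instruct": "1.56B params",
    "slicexai/elm-turbo-0.50-instruct": "2.01B params",
}
```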

Required packages for [Hugging Face Phi-3-mini](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct):
```bash
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```

Example - to run `slicexai/elm-turbo-0.50-instruct` (swap in any of the other slice IDs to run a smaller model):
```python
# Minimal generation example using the Hugging Face Transformers pipeline API.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

elm_turbo_model = "slicexai/elm-turbo-0.50-instruct"

# Load the model on GPU in bfloat16 with FlashAttention-2 enabled.
model = AutoModelForCausalLM.from_pretrained(
    elm_turbo_model,
    device_map="cuda",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
)
messages = [
    {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
]

tokenizer = AutoTokenizer.from_pretrained(elm_turbo_model, legacy=False)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding (do_sample=False); return only the newly generated text.
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "repetition_penalty": 1.2,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]["generated_text"])
```
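
If you prefer not to use the pipeline API, here is an equivalent sketch that continues from the `model`, `tokenizer`, and `messages` variables defined above: it applies the chat template and calls `model.generate` directly. The generation parameters mirror the example and are only a suggestion.

```python
# Alternative to the pipeline: apply the chat template and call generate() directly.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        inputs,
        max_new_tokens=500,
        do_sample=False,          # greedy decoding, as in the pipeline example
        repetition_penalty=1.2,
    )

# Decode only the newly generated tokens (everything after the prompt).
print(tokenizer.decode(output_ids[0, inputs.shape[-1]:], skip_special_tokens=True))
```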