---
license: apache-2.0
---

### Huggingface EagleX 1.7T Model - via HF Transformers Library
> **! Important Note !**
>
> The following is the HF transformers implementation of the EagleX 7B 1.7T model. **It is meant to be used with the Hugging Face transformers library.**
>
> For the standalone model weights, for use with other RWKV libraries, see [here](https://huggingface.co/recursal/EagleX_1-7T).
>
> This is not an instruction-tuned model! (coming soon...)
>
> For full details on this experimental model, see: [https://substack.recursal.ai/p/eaglex-17t-soaring-past-llama-7b](https://substack.recursal.ai/p/eaglex-17t-soaring-past-llama-7b)
>
- [Our cloud platform - the best place to host, finetune, and do inference for RWKV](https://recursal.ai)
- [HF Demo](https://huggingface.co/spaces/recursal/EagleX-7B-1.7T-Gradio-Demo)
- [Our wiki](https://wiki.rwkv.com)
- [pth model weights](https://huggingface.co/recursal/EagleX_1-7T)

#### Running on GPU via HF transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_prompt(instruction, input=""):
    # Normalize line endings and collapse double newlines in the prompt text
    instruction = instruction.strip().replace('\r\n', '\n').replace('\n\n', '\n')
    input = input.strip().replace('\r\n', '\n').replace('\n\n', '\n')
    if input:
        return f"""Instruction: {instruction}
Input: {input}
Response:"""
    else:
        return f"""User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: {instruction}
Assistant:"""

# Load the model in fp16 and move it to GPU 0
model = AutoModelForCausalLM.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True, torch_dtype=torch.float16).to(0)
tokenizer = AutoTokenizer.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True)

text = "Tell me a fun fact"
prompt = generate_prompt(text)

inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```
output:
```shell
User: hi
Assistant: Hi. I am your assistant and I will provide expert full response in full details. Please feel free to ask any question and I will always answer it.
User: Tell me a fun fact
Assistant: Did you know that the human brain has 100 billion neurons?
```
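The `generate_prompt` helper also supports an Instruction/Input/Response format when an `input` argument is given. A minimal sketch of that branch, reusing the `model` and `tokenizer` loaded above (the instruction and input text here are illustrative only):
```python
# Sketch: the Instruction/Input/Response branch of generate_prompt.
# Reuses the model and tokenizer loaded in the example above.
prompt = generate_prompt(
    "Summarize the following text in one sentence.",  # illustrative instruction
    input="RWKV is an RNN architecture that aims for transformer-level quality "
          "with efficient, constant-memory inference.",  # illustrative input
)
inputs = tokenizer(prompt, return_tensors="pt").to(0)
output = model.generate(inputs["input_ids"], max_new_tokens=64, do_sample=True, temperature=1.0, top_p=0.3, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```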
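#### Running on CPU via HF transformers
A minimal CPU-only sketch, not part of the original card: it assumes enough system RAM for the fp32 weights of a 7B model (roughly 28 GB) and will be much slower than GPU inference.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: CPU-only inference (assumes ~28 GB RAM for fp32 7B weights).
model = AutoModelForCausalLM.from_pretrained(
    "recursal/EagleX_1-7T_HF", trust_remote_code=True, torch_dtype=torch.float32
)
tokenizer = AutoTokenizer.from_pretrained("recursal/EagleX_1-7T_HF", trust_remote_code=True)

prompt = generate_prompt("Tell me a fun fact")  # generate_prompt as defined above
inputs = tokenizer(prompt, return_tensors="pt")  # tensors stay on the CPU
output = model.generate(inputs["input_ids"], max_new_tokens=128, do_sample=True, temperature=1.0, top_p=0.3, top_k=0)
print(tokenizer.decode(output[0].tolist(), skip_special_tokens=True))
```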