Update README.md
README.md CHANGED
@@ -7,18 +7,17 @@ pipeline_tag: text-generation
 tags:
 - tulpa
 - CognitionAI
-- Orca
 - llama-2
 - llama-2-70b
 ---
 
 
-#
+# Tulpa 70B
+Details soon...
 
 ## Model Details
 
-* **Developed by**: [
-* **Backbone Model**: [LLaMA-2](https://github.com/facebookresearch/llama/tree/main)
+* **Developed by**: [Cognition AI REDOTHELINKITSAPLACEHOLDERFROMTHEOTHERVERSIONOFTHECARD](https://nbdfsbdfmsbn.com/)
 
 ## Dataset Details
 
@@ -27,37 +26,13 @@ PENDING
 
 ### Prompt Template
 ```
-### System:
-{System}
-### User:
-{User}
+### Instruction:
+{Prompt & Backstory}
 ### Assistant:
-{Assistant}
+{Output}
 ```
 
-## Usage
-
-```python
-import torch
-from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
-tokenizer = AutoTokenizer.from_pretrained("deepnight-nexus/llama2-70b-inst")
-model = AutoModelForCausalLM.from_pretrained(
-    "deepnight-nexus/llama2-70b-inst",
-    device_map="auto",
-    torch_dtype=torch.float16,
-    load_in_8bit=True,
-    rope_scaling={"type": "dynamic", "factor": 2}  # allows handling of longer inputs
-)
-prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
-inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
-del inputs["token_type_ids"]
-streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
-output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
-output_text = tokenizer.decode(output[0], skip_special_tokens=True)
-```
-
-
 ## Contact Us
-### About
-- [
-If you have a dataset to build domain specific LLMs or make LLM applications, please contact us at ► [
+### About Cognition AI
+- [Cognition AI](https://nbdfsbdfmsbn.com/) ADDMORE
+If you have a dataset to build domain specific LLMs or make LLM applications, please contact us at ► [Contact Us](mailto:addeddress@cog.com)
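For reference, the sketch below shows one way the prompt template introduced in this revision could be driven with `transformers`. It is a minimal illustration rather than part of the card itself: the repository id is a placeholder (this diff does not state the final model id), and the generation settings are arbitrary.

```python
# Minimal sketch only: the repo id is a placeholder, since this diff does not
# state the final model id, and the generation settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognition-ai/tulpa-70b"  # placeholder id; replace with the published repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Fill the template from the card: "### Instruction:" carries the prompt and
# backstory, and "### Assistant:" is left open for the model to complete.
prompt = (
    "### Instruction:\n"
    "Thomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n"
    "### Assistant:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```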