Deer will also be available in larger model sizes.
## Model Overview

`deer-3b` is a 3-billion-parameter causal language model derived from BLOOM's 3B model and fine-tuned on ~15K instructions.

## Usage
The model can be used with the `transformers` library on a machine with GPUs.
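
The pipeline setup is elided from this diff hunk; a minimal sketch of it, mirroring the call shown in the LangChain section below (model id, dtype, and flags are taken from there, not invented):

```python
import torch
from transformers import pipeline

# Build the generation pipeline; bfloat16 halves memory use and
# device_map="auto" spreads the weights across available GPUs.
generate_text = pipeline(model="PSanni/Deer-3b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto")
```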

```python
res = generate_text("Explain to me the difference between nuclear fission and fusion.")
print(res[0]["generated_text"])
```
### LangChain Usage

To use the pipeline with LangChain, you must set `return_full_text=True`, as LangChain expects the full text to be returned and the default for the pipeline is to only return the new text.

```python
import torch
from transformers import pipeline

generate_text = pipeline(model="PSanni/Deer-3b", torch_dtype=torch.bfloat16,
                         trust_remote_code=True, device_map="auto", return_full_text=True)
```
You can create a prompt that either has only an instruction or has an instruction with context:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import HuggingFacePipeline

# Template for an instruction with no input
prompt = PromptTemplate(
    input_variables=["instruction"],
    template="{instruction}")

# Template for an instruction with input
prompt_with_context = PromptTemplate(
    input_variables=["instruction", "context"],
    template="{instruction}\n\nInput:\n{context}")

hf_pipeline = HuggingFacePipeline(pipeline=generate_text)

llm_chain = LLMChain(llm=hf_pipeline, prompt=prompt)
llm_context_chain = LLMChain(llm=hf_pipeline, prompt=prompt_with_context)
```
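
For illustration, this is the prompt text the instruction-with-input template produces; plain Python `str.format` performs the same substitution as `PromptTemplate` (the instruction and context strings here are made up):

```python
# Same substitution the instruction-with-input template performs.
template = "{instruction}\n\nInput:\n{context}"
prompt_text = template.format(
    instruction="Summarize the input in one sentence.",
    context="Deer-3b is a 3B-parameter instruction-tuned model.",
)
print(prompt_text)
# The instruction comes first, then a blank line and the input block.
```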
Example prediction using a simple instruction:
```python
print(llm_chain.predict(instruction="Give me a list of morning exercises.").lstrip())
```
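
For an instruction with context, the context chain's `predict` call takes both template variables (illustrative strings; this assumes the chains defined above are in scope):

```python
print(llm_context_chain.predict(
    instruction="Summarize the input in one sentence.",
    context="Deer-3b is derived from BLOOM's 3B model and fine-tuned on ~15K instructions.",
).lstrip())
```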
### Note:
Please note that the model has not been aligned with human preferences and may generate unsuitable, unethical, biased, or toxic responses.