Update README.md
README.md CHANGED
@@ -79,8 +79,8 @@ Any model can provide inaccurate or incomplete information, and should be used i
 The fastest way to get started with BLING is through direct import in transformers:
 
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("dragon-yi-6b-
-model = AutoModelForCausalLM.from_pretrained("dragon-yi-6b-
+tokenizer = AutoTokenizer.from_pretrained("dragon-yi-6b-v0")
+model = AutoModelForCausalLM.from_pretrained("dragon-yi-6b-v0")
 
 Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents.
 
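The snippet corrected in this diff only loads the tokenizer and model. As a rough illustration of how the loaded objects might then be used, here is a minimal sketch; the "<human>: ... <bot>:" prompt wrapper, the example passage and question, the generation parameters, and the "llmware/" organization prefix on the model id are assumptions for illustration and are not part of the diff.

from transformers import AutoTokenizer, AutoModelForCausalLM

# Model id as written in the README; loading from the Hugging Face Hub
# may require the organization prefix, e.g. "llmware/dragon-yi-6b-v0" (assumption).
model_name = "dragon-yi-6b-v0"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumed prompt wrapper: a context passage plus a question placed between
# "<human>:" and "<bot>:" markers, in the style of llmware's RAG-oriented models.
context = "The quarterly report shows revenue of $12.5 million, up 8% year over year."
question = "What was the revenue in the quarter?"
prompt = f"<human>: {context}\n{question}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding keeps the answer short and grounded in the supplied passage.
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Slice off the prompt tokens so only the newly generated answer is decoded.
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)

For full evaluation, the generation_test scripts referenced above remain the intended entry point; the sketch here only shows the shape of a single prompt-and-generate call.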