Update README.md

README.md

GGUF format files of the model vinai/PhoGPT-4B-Chat.

This model file is compatible with the latest llama.cpp.

Context: I was trying to get PhoGPT to work with llama-cpp and llama-cpp-python. I found [nguyenviet/PhoGPT-4B-Chat-GGUF](https://huggingface.co/nguyenviet/PhoGPT-4B-Chat-GGUF) but could not get it to work:

```
from llama_cpp import Llama
...
llama_load_model_from_file: failed to load model
...
```

After the [issue](https://github.com/VinAIResearch/PhoGPT/issues/22) I opened at the PhoGPT repo was resolved, I was able to create the GGUF file.

I figure people will want to try the model in Colab, so here it is: you don't have to create it yourself.
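A minimal sketch of how one might load this file with llama-cpp-python. The file name `PhoGPT-4B-Chat-q4_0.gguf` and the `### Câu hỏi: … ### Trả lời:` instruction format are assumptions; check this repo's file list and the PhoGPT model card for the exact values.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in PhoGPT-4B-Chat's instruction format.

    The template below is an assumption taken from the PhoGPT model card;
    verify it against the card before relying on it.
    """
    return f"### Câu hỏi: {instruction}\n### Trả lời:"


def ask(model_path: str, question: str) -> str:
    """Load the GGUF file and run one completion.

    Requires `pip install llama-cpp-python` and the downloaded .gguf file;
    not called here since it needs the model weights on disk.
    """
    from llama_cpp import Llama

    llm = Llama(model_path=model_path, n_ctx=2048)
    out = llm(build_prompt(question), max_tokens=128, stop=["###"])
    return out["choices"][0]["text"]
```

For example, `ask("PhoGPT-4B-Chat-q4_0.gguf", "Xin chào, bạn là ai?")` (hypothetical file name) would return the model's answer as a string.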