tom1669 committed · verified
Commit fb412e2 · 1 Parent(s): 2586bc0

Update README.md

Files changed (1): README.md (+5 -3)
README.md CHANGED
@@ -1,8 +1,8 @@
  GGUF format files of the model vinai/PhoGPT-4B-Chat.

- I'm trying to get PhoGPT to work with llama-cpp and llama-cpp-python.
+ This model file is compatible with the latest llama.cpp.

- I cannot get [nguyenviet/PhoGPT-4B-Chat-GGUF](https://huggingface.co/nguyenviet/PhoGPT-4B-Chat-GGUF) to work in Colab:
+ Context: I was trying to get PhoGPT to work with llama-cpp and llama-cpp-python. I found [nguyenviet/PhoGPT-4B-Chat-GGUF](https://huggingface.co/nguyenviet/PhoGPT-4B-Chat-GGUF) but could not get it to work:

  ```
  from llama_cpp import Llama
@@ -18,4 +18,6 @@ llama_load_model_from_file: failed to load model
  ...
  ```

- My [issue](https://github.com/VinAIResearch/PhoGPT/issues/22) was resolved (thanks to @nviet and @datquocnguyen), and I figure people want to try the model in Colab. So I created my own `GGUF` file.
+ After the [issue](https://github.com/VinAIResearch/PhoGPT/issues/22) I opened in the PhoGPT repo was resolved, I was able to create the GGUF file.
+
+ I figure people want to try the model in Colab, so here it is; you don't have to create it yourself.
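For readers who want to try the file with llama-cpp-python, here is a minimal sketch. The local file name, context size, sampling settings, and the instruction template (`### Câu hỏi: ... ### Trả lời:`, as documented in the PhoGPT repo) are assumptions on my part, not taken from this commit:

```python
# Sketch: querying a PhoGPT-4B-Chat GGUF file with llama-cpp-python.
# Assumption: the file has been downloaded locally as "PhoGPT-4B-Chat.gguf"
# (e.g. via huggingface_hub or the Hub web UI).

PROMPT_TEMPLATE = "### Câu hỏi: {instruction}\n### Trả lời:"


def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in PhoGPT-4B-Chat's instruction template."""
    return PROMPT_TEMPLATE.format(instruction=instruction)


def main() -> None:
    # pip install llama-cpp-python
    from llama_cpp import Llama

    # n_ctx is a guess; adjust to your needs and available RAM.
    llm = Llama(model_path="PhoGPT-4B-Chat.gguf", n_ctx=2048)
    out = llm(build_prompt("Viết một bài thơ ngắn về Hà Nội."), max_tokens=256)
    print(out["choices"][0]["text"])


if __name__ == "__main__":
    main()
```

In Colab, install the wheel first (`!pip install llama-cpp-python`) and download the GGUF file into the runtime before calling `main()`.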