LoupGarou committed (verified)
Commit 032a633 · Parent(s): 36a866b

Update README.md

Files changed (1): README.md (+6 -6)
README.md CHANGED
@@ -20,7 +20,7 @@ This model card describes the deepseek-coder-33b-instruct-pythagora version 3 mo
 
 ### Model Sources
 
-- **Repository:** [LoupGarou/deepseek-coder-33b-instruct-pythagora-gguf](https://huggingface.co/LoupGarou/deepseek-coder-33b-instruct-pythagora-v3-gguf)
+- **Repository:** [LoupGarou/deepseek-coder-33b-instruct-pythagora-v3-gguf](https://huggingface.co/LoupGarou/deepseek-coder-33b-instruct-pythagora-v3-gguf)
 - **GitHub Repository (Proxy Application):** [MoonlightByte/Pythagora-LLM-Proxy](https://github.com/MoonlightByte/Pythagora-LLM-Proxy)
 - **Original Model Repository:** [DeepSeek Coder](https://github.com/deepseek-ai/deepseek-coder)
 
@@ -46,12 +46,12 @@ Users should familiarize themselves with the [Pythagora GPT Pilot](https://githu
 
 ## How to Get Started with the Model
 
-To use this model with the Pythagora GPT Pilot application:
+To use this model with the Pythagora GPT Pilot application and the Pythagora-LLM-Proxy:
 
-1. Set up the Pythagora LLM Proxy by following the instructions in the [GitHub repository](https://github.com/MoonlightByte/Pythagora-LLM-Proxy).
-2. Configure GPT Pilot to use the proxy by setting the OpenAI API endpoint to `http://localhost:8080/v1/chat/completions`.
-3. Run GPT Pilot as usual, and the proxy will handle the communication between GPT Pilot and the deepseek-coder-6.7b-instruct-pythagora model.
-4. It is possible to run Pythagora directly to LM Studio or any other service with mixed results since these models were not finetuned using a chat format.
+1. Set up the Pythagora LLM Proxy to work with your LLM host software (e.g., LM Studio) by following the instructions in the [GitHub repository](https://github.com/MoonlightByte/Pythagora-LLM-Proxy).
+2. Configure GPT Pilot to use the Pythagora LLM Proxy by setting the OpenAI API endpoint to `http://localhost:8080/v1/chat/completions`.
+3. Run GPT Pilot as usual; the proxy will handle communication between GPT Pilot and the LLM host software running the deepseek-coder-6.7b-instruct-pythagora model.
+4. It is possible to point Pythagora directly at LM Studio or any other service, but be cautious of the 16,384-token limit: exceeding it results in an endless loop of "invalid json" responses.
 
 For more detailed instructions and examples, please refer to the [Pythagora LLM Proxy README](https://github.com/MoonlightByte/Pythagora-LLM-Proxy/blob/main/README.md).
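
The endpoint in step 2 of the updated instructions is an OpenAI-style chat-completions URL, so the configuration can be smoke-tested with a minimal request before wiring up GPT Pilot. The sketch below is an assumption-laden example, not part of the commit: it uses only the Python standard library, and the model name in the payload is illustrative (LM Studio typically serves whichever model is currently loaded, regardless of this field).

```python
import json
import urllib.request

# Endpoint from step 2; the proxy forwards OpenAI-style requests to the
# LLM host software (e.g., LM Studio) running the Pythagora model.
PROXY_URL = "http://localhost:8080/v1/chat/completions"

# Minimal OpenAI-style chat payload. The "model" value is illustrative;
# LM Studio usually ignores it and serves the model it has loaded.
payload = {
    "model": "deepseek-coder-6.7b-instruct-pythagora",
    "messages": [
        {"role": "user", "content": "Write a Python one-liner that reverses a string."}
    ],
    "temperature": 0.2,
}

request = urllib.request.Request(
    PROXY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.load(response)

# An OpenAI-compatible server returns choices[0].message.content.
print(body["choices"][0]["message"]["content"])
```

If this prints a completion, GPT Pilot pointed at the same URL should work as well.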
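On the 16,384-token caution in step 4: a cheap pre-flight estimate can help avoid the "invalid json" loop when talking to the host service directly. The four-characters-per-token ratio below is a rough heuristic of mine, not the model's actual tokenizer, so treat the result as an approximation only.

```python
# Rough guard against the 16,384-token context limit mentioned in step 4.
# Assumes ~4 characters per token, a common heuristic for English and code;
# the model's real tokenizer may count differently.
CONTEXT_LIMIT = 16_384
CHARS_PER_TOKEN = 4

def estimated_tokens(text: str) -> int:
    """Cheap token estimate: character count divided by an average token width."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(prompt: str, reserved_for_reply: int = 2_048) -> bool:
    """True if the prompt likely leaves room for the model's reply."""
    return estimated_tokens(prompt) + reserved_for_reply <= CONTEXT_LIMIT

prompt = "..."  # the conversation history about to be sent
if not fits_in_context(prompt):
    print("Prompt likely exceeds the context window; trim history before sending.")
```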