Mungert committed
Commit 2dab99a · verified · 1 Parent(s): 00b6730

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -96,8 +96,8 @@ If you have a minute, I’d really appreciate it if you could test my Phi-4-Mini
 💬 Click the **chat icon** (bottom right of the main and dashboard pages). Then toggle between the LLM types: TurboLLM -> FreeLLM -> TestLLM (Phi-4-Mini-Instruct is called TestLLM).
 
 ### What I'm Testing
-I'm experimenting with **function calling** against my network monitoring service. If you're curious, I'd be happy to share how it works!
-🟡 **TestLLM** – Runs **Phi-4-mini-instruct** using phi-4-mini-f16-q8.gguf with llama.cpp on CPU (should take about 30s to load; inference is slow and it only supports one user at a time, still working on scaling!).
+I'm experimenting with **function calling** against my network monitoring service, using small open-source models. If you're curious, I'd be happy to share how it works!
+🟡 **TestLLM** – Runs **Phi-4-mini-instruct** using phi-4-mini-q4_0.gguf with llama.cpp on 6 threads of a CPU VM (should take about 15s to load; inference is quite slow and it only processes one user prompt at a time, still working on scaling!).
 
 ### The other Available AI Assistants
 🟢 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://freenetworkmonitor.click) or [Download](https://freenetworkmonitor.click/download) the Free Network Monitor agent to get more tokens. Alternatively, use the FreeLLM.
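
For anyone curious how the TestLLM entry in the diff above can be reproduced, here is a minimal sketch. It assumes the llama-cpp-python bindings (the commit itself only mentions llama.cpp), and the model path, the `check_host_status` tool name, and its schema are illustrative placeholders, not the actual Free Network Monitor API.

```python
# Minimal sketch: load the q4_0 GGUF on 6 CPU threads and offer one
# (hypothetical) network-monitoring tool for function-calling experiments.
# Assumes llama-cpp-python; model path and tool schema are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-4-mini-q4_0.gguf",  # quantised model from this repo
    n_ctx=4096,                         # context window
    n_threads=6,                        # mirrors the 6-thread CPU VM above
)

tools = [{
    "type": "function",
    "function": {
        "name": "check_host_status",    # hypothetical monitoring endpoint
        "description": "Return the latest uptime status for a monitored host.",
        "parameters": {
            "type": "object",
            "properties": {"host": {"type": "string"}},
            "required": ["host"],
        },
    },
}]

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Is example.com still up?"}],
    tools=tools,
)
print(response["choices"][0]["message"])
```

Whether the model returns a structured tool call (rather than plain text) depends on the chat handler llama-cpp-python selects for Phi-4; the `n_threads=6` setting is simply the CPU-VM configuration described in the diff.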