Update README.md
README.md
If you have a minute, I'd really appreciate it if you could test my Phi-4-Mini-Instruct.

💬 Click the **chat icon** (bottom right of the main and dashboard pages), then toggle between the LLM types: TurboLLM -> FreeLLM -> TestLLM (Phi-4-Mini-Instruct is called TestLLM).

### What I'm Testing

I'm experimenting with **function calling** against my network monitoring service, using small open-source models. The question I'm exploring is: "How small can it go and still function?"

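To make that concrete, here is a minimal sketch of the kind of OpenAI-style tool definition a model is asked to fill in during function calling. The `check_host_uptime` function and its parameters are hypothetical placeholders, not the monitoring service's real API.

```python
# Hypothetical tool schema for function calling (illustrative only; the real
# network monitoring functions are not published in this README).
check_host_uptime_tool = {
    "type": "function",
    "function": {
        "name": "check_host_uptime",
        "description": "Return uptime statistics for a monitored host.",
        "parameters": {
            "type": "object",
            "properties": {
                "host": {"type": "string", "description": "Hostname or IP address to check."},
            },
            "required": ["host"],
        },
    },
}
```

A model that "still functions" at this size should answer a prompt like "Is example.com up?" by calling this tool with valid JSON arguments.
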
🟡 **TestLLM** – Runs **Phi-4-mini-instruct** (phi-4-mini-q4_0.gguf) with llama.cpp on 6 threads of a CPU VM. It takes about 15 s to load, inference is quite slow, and it only processes one user prompt at a time (still working on scaling!). If you're curious, I'd be happy to share how it works.

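If you want to try a similar setup locally, a minimal sketch using llama-cpp-python (one possible way to drive llama.cpp; this project may run llama.cpp differently) could look like the following, assuming phi-4-mini-q4_0.gguf sits in the working directory:

```python
# Minimal local sketch (assumptions: llama-cpp-python installed, model file present).
from llama_cpp import Llama

llm = Llama(
    model_path="phi-4-mini-q4_0.gguf",  # quantized Phi-4-mini-instruct weights
    n_threads=6,                        # mirrors the 6-thread CPU VM described above
    n_ctx=4096,                         # context window; adjust to taste
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a network monitor does."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
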
### The Other Available AI Assistants

🟢 **TurboLLM** – Uses **gpt-4o-mini**. Fast! Note: tokens are limited since OpenAI models are pricey, but you can [Login](https://freenetworkmonitor.click) or [Download](https://freenetworkmonitor.click/download) the Free Network Monitor agent to get more tokens. Alternatively, use the FreeLLM.

🔵 **FreeLLM** – Runs **open-source Hugging Face models**. Medium speed (unlimited, subject to Hugging Face API availability).