Update config.py
config.py CHANGED
@@ -6,27 +6,6 @@ import time
 import threading
 
 ARENA_NAME = "# π The GPU-Poor LLM Gladiator Arena π v25.03"
-ARENA_DESCRIPTION = """
-**Step right up to the arena where frugal meets fabulous in the world of AI!**
-Watch as our compact contenders (maxing out at 14B parameters) duke it out in a battle of wits and words.
-
-What started as a simple experiment has grown into a popular platform for evaluating compact language models.
-As the arena continues to expand with more models, features, and battles, it requires computational resources to maintain and improve.
-**If you find this project valuable and would like to support its development, consider sponsoring:**
-[](https://github.com/sponsors/k-mktr)
-
-<details>
-<summary>π How to Use</summary>
-
-1. To start the battle, go to the 'Battle Arena' tab.
-2. Type your prompt into the text box. Alternatively, click the "π²" button to receive a random prompt.
-3. Click the "Generate Responses" button to view the models' responses.
-4. Cast your vote for the model that provided the better response. In the event of a Tie, enter a new prompt before continuing the battle.
-5. Check out the Leaderboard to see how models rank against each other.
-
-More info: [README.md](https://huggingface.co/spaces/k-mktr/gpu-poor-llm-arena/blob/main/README.md)
-</details>
-"""
 
 # Ollama API configuration
 API_URL = os.environ.get("API_URL")
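For context, the hunk only removes the `ARENA_DESCRIPTION` banner; the Ollama API configuration that follows it is unchanged. Below is a minimal sketch of how a value like `API_URL` might be consumed by the Space's backend. The request code is not part of this diff, so the use of the `/api/generate` endpoint, the `requests` dependency, the default local URL, and the `generate_response` helper name are all assumptions for illustration.

```python
import os

import requests  # assumed HTTP client; not pinned anywhere in this diff

# Same pattern as config.py: the Ollama-compatible endpoint comes from the environment.
API_URL = os.environ.get("API_URL", "http://localhost:11434")


def generate_response(model: str, prompt: str, timeout: int = 120) -> str:
    """Hypothetical helper: request a single non-streamed completion from the server."""
    resp = requests.post(
        f"{API_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=timeout,
    )
    resp.raise_for_status()
    # Ollama's /api/generate returns the completion text under the "response" key.
    return resp.json()["response"]


if __name__ == "__main__":
    # Example usage; the model tag is illustrative, not taken from this diff.
    print(generate_response("llama3.2:3b", "Write a one-line greeting."))
```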