<picture>
<source srcset="./assets/macrocosmos-white.png" media="(prefers-color-scheme: dark)">
  <img src="./assets/macrocosmos-white.png">
</picture>
<picture>
<source srcset="./assets/macrocosmos-black.png" media="(prefers-color-scheme: light)">
  <img src="./assets/macrocosmos-black.png">
</picture>
<br/>
<br/>
<br/>
# Subnet 1 API
> Note: This project is still in development and is not yet ready for production use.
The official REST API for Bittensor's flagship subnet 1 ([prompting](https://github.com/opentensor/prompting)), built by [Macrocosmos](https://macrocosmos.ai).
Subnet 1 is a decentralized, open-source network of around 1,000 highly capable LLM agents. These agents can perform a wide range of tasks, from simple math problems to complex natural language processing tasks, and as subnet 1 is constantly evolving, its capabilities are always expanding. Our goal is to provide a world-class inference engine for developers and researchers alike.
This API is designed to power applications and facilitate interaction between subnets by providing a simple, easy-to-use interface for developers, enabling:
1. **Conversation**: Chatting with the network (streaming and non-streaming)
2. **Data cleaning**: Filtering empty and otherwise useless responses
3. **Advanced inference**: Providing enhanced responses using SOTA ensembling techniques (WIP)
Validators can use this API to interact with the network and perform various tasks.
To run an API server, you will need a bittensor wallet which is registered as a validator on the relevant subnet (1@mainnet or 61@testnet).
> Note: At present, miners are choosing not to stream their responses to the network. This means that the server cannot stream a response back to the client until the miner has finished processing the request. This is a temporary measure and will be resolved in the future.
## How it works
The API server is a RESTful API that provides endpoints for interacting with the network. It is a simple [wrapper](./validators/sn1_validator_wrapper.py) around your subnet 1 validator, which makes use of the dendrite to make queries.
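To illustrate the query flow the wrapper follows, here is a minimal, library-free sketch: fan a prompt out to `k` miners, wait up to a timeout, and collect whatever comes back. The function names and latency simulation are hypothetical stand-ins for the actual dendrite calls made by the validator wrapper.

```python
import asyncio

async def query_miner(uid: int, message: str) -> str:
    # Hypothetical stand-in for a dendrite call to a single miner axon.
    await asyncio.sleep(0.01 * uid)  # simulate network latency
    return f"response from miner {uid}: {message!r}"

async def query_network(message: str, k: int = 3, timeout: float = 1.0) -> list[str]:
    # Fan out to k miners and gather whatever returns within the timeout,
    # mirroring how the wrapper queries the subnet on the client's behalf.
    tasks = [asyncio.create_task(query_miner(uid, message)) for uid in range(k)]
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for task in pending:
        task.cancel()  # drop miners that did not respond in time
    return [task.result() for task in done]

responses = asyncio.run(query_network("hello", k=3, timeout=1.0))
```

The real implementation uses the validator's dendrite and the miners' axons rather than coroutines, but the fan-out/timeout/gather shape is the same.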
## Install
Create a new python environment and install the dependencies with the following commands (first time only):
```bash
python3.10 -m venv env
source env/bin/activate
pip install -r requirements.txt
```
> Note: This project requires python >=3.10.
> Note: Currently the prompting library is only installable on machines with CUDA devices (NVIDIA GPUs).
## Run
First, activate the virtual environment:
```bash
source env/bin/activate
```
Then run an API server on subnet 1 with the following command:
```bash
EXPECTED_ACCESS_KEY=<ACCESS_KEY> python server.py --wallet.name <WALLET_NAME> --wallet.hotkey <WALLET_HOTKEY> --netuid <NETUID> --neuron.model_id mock --neuron.tasks math --neuron.task_p 1 --neuron.device cpu
```
The flags above ensure that the server uses no GPU memory and does not load the large models used by the incentive mechanism.
> Note: This command is subject to change as the project evolves.
We recommend that you run the server using a process manager like PM2. This will ensure that the server is always running and will restart if it crashes.
```bash
EXPECTED_ACCESS_KEY=<ACCESS_KEY> pm2 start server.py --interpreter python3 --name sn1-api -- --wallet.name <WALLET_NAME> --wallet.hotkey <WALLET_HOTKEY> --netuid <NETUID> --neuron.model_id mock --neuron.tasks math --neuron.task_p 1 --neuron.device cpu
```
## API Usage
At present, the API provides two endpoints: `/chat` (live) and `/echo` (test).
`/chat` is used to chat with the network and receive a response. The endpoint requires a JSON payload with the following fields:
- `k: int`: The number of responses to return
- `timeout: float`: The time in seconds to wait for a response
- `roles: List[str]`: The roles of the agents to query
- `messages: List[str]`: The messages to send to the network
- `prefer: str`: The preferred response to use as the default view. Should be one of `{'longest', 'shortest'}`
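As a sketch, the request body can be assembled like this in Python. The helper name `build_chat_payload` and its defaults are illustrative; only the field names and types come from the list above.

```python
import json

def build_chat_payload(messages, roles=None, k=5, timeout=15.0, prefer="longest"):
    """Assemble the JSON body expected by the /chat endpoint."""
    roles = roles or ["user"] * len(messages)  # default every message to the user role
    return {
        "k": k,                # number of responses to return
        "timeout": timeout,    # seconds to wait for a response
        "roles": roles,
        "messages": messages,
        "prefer": prefer,      # 'longest' or 'shortest'
    }

payload = build_chat_payload(["Tell me a happy story about a rabbit and a turtle"])
body = json.dumps(payload)
```

You could then POST `body` to the server (e.g. with `requests.post(url, headers={"api_key": ACCESS_KEY}, data=body, stream=True)`, assuming the same header and URL as the curl example below).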
Responses from the `/chat` endpoint are streamed back to the client as they are received from the network. Upon completion, the server will return a JSON response with the following fields:
- `streamed_chunks: List[str]`: The streamed responses from the network
- `streamed_chunks_timings: List[float]`: The time taken to receive each streamed response
- `synapse: StreamPromptingSynapse`: The synapse used to query the network. This contains full context and metadata about the query.
The `StreamPromptingSynapse` object contains the following fields:
- `uid: int`: The unique identifier of the synapse
- `completion: str`: The final response from the network
- `timing: float`: The total time taken to receive the final response
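A minimal sketch of consuming the final JSON on the client side, using the field names listed above (the sample values here are made up for illustration):

```python
import json

# Example final JSON as the server might return it (values are illustrative).
final = json.dumps({
    "streamed_chunks": ["Once upon a time..."],
    "streamed_chunks_timings": [4.64],
    "uid": 559,
    "completion": "Once upon a time...",
    "timing": 4.72,
})

result = json.loads(final)
answer = result["completion"]     # the network's final response
total_time = result["timing"]     # total seconds for the full response
```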
> Note: The API is subject to change as the project evolves.
## Testing
To test the API locally, you can use the following curl command:
```bash
curl --no-buffer -X POST http://0.0.0.0:10000/chat/ -H "api_key: <ACCESS_KEY>" -d '{"k": 5, "timeout": 15, "roles": ["user"], "messages": ["Tell me a happy story about a rabbit and a turtle that meet on a budget cruise around Northern Ireland"]}'
```
> Note: Use the `--no-buffer` flag to ensure that the response is streamed back to the client.
The above example prompt yields the following.
Streamed response:
```
Once upon a time, a rabbit named Rosie and a turtle named Tim embarked on a budget cruise around Northern Ireland. Despite their differences in speed, Rosie's energetic hopping and Tim's slow but steady pace, they quickly became friends during the voyage. \n\nAs they explored the stunning landscapes and quaint villages along the coast, Rosie and Tim discovered a shared love for adventure and new experiences. They enjoyed sampling local cuisine, attending traditional music sessions, and participating in fun onboard activities.\n\nOne memorable evening, under the shimmering Northern Lights, Rosie and Tim danced together on the deck, celebrating their unlikely friendship and the beauty of the world around them. Their bond transcended their differences, proving that true companionship knows no boundaries.\n\nAt the end of the cruise, as they bid farewell to their fellow travelers and the enchanting sights of Northern Ireland, Rosie and Tim knew that their special connection would last a lifetime. And so, with hearts full of joy and memories to cherish, the rabbit and the turtle set off on new adventures, forever grateful for the magical journey they shared.
```
Final JSON:
```json
{
  "streamed_chunks": ["Once upon a time, a rabbit named Rosie and a turtle named Tim embarked on a budget cruise around Northern Ireland. Despite their differences in speed, Rosie's energetic hopping and Tim's slow but steady pace, they quickly became friends during the voyage. \n\nAs they explored the stunning landscapes and quaint villages along the coast, Rosie and Tim discovered a shared love for adventure and new experiences. They enjoyed sampling local cuisine, attending traditional music sessions, and participating in fun onboard activities.\n\nOne memorable evening, under the shimmering Northern Lights, Rosie and Tim danced together on the deck, celebrating their unlikely friendship and the beauty of the world around them. Their bond transcended their differences, proving that true companionship knows no boundaries.\n\nAt the end of the cruise, as they bid farewell to their fellow travelers and the enchanting sights of Northern Ireland, Rosie and Tim knew that their special connection would last a lifetime. And so, with hearts full of joy and memories to cherish, the rabbit and the turtle set off on new adventures, forever grateful for the magical journey they shared."],
  "streamed_chunks_timings": [4.6420252323150635],
  "uid": 559,
  "completion": "Once upon a time, a rabbit named Rosie and a turtle named Tim embarked on a budget cruise around Northern Ireland. Despite their differences in speed, Rosie's energetic hopping and Tim's slow but steady pace, they quickly became friends during the voyage. \n\nAs they explored the stunning landscapes and quaint villages along the coast, Rosie and Tim discovered a shared love for adventure and new experiences. They enjoyed sampling local cuisine, attending traditional music sessions, and participating in fun onboard activities.\n\nOne memorable evening, under the shimmering Northern Lights, Rosie and Tim danced together on the deck, celebrating their unlikely friendship and the beauty of the world around them. Their bond transcended their differences, proving that true companionship knows no boundaries.\n\nAt the end of the cruise, as they bid farewell to their fellow travelers and the enchanting sights of Northern Ireland, Rosie and Tim knew that their special connection would last a lifetime. And so, with hearts full of joy and memories to cherish, the rabbit and the turtle set off on new adventures, forever grateful for the magical journey they shared.",
  "timing": 4.720629930496216
}
```
After verifying that the server is responding to requests locally, you can test the server on a remote machine.
### Troubleshooting
If you do not receive a response from the server, check that the server is running and that the port is open on the server. You can open the port with the following command:
```bash
sudo ufw allow 10000/tcp
```
---
## Contributing
If you would like to contribute to the project, please read the [CONTRIBUTING.md](CONTRIBUTING.md) file for more information.
You can find out more about the project by visiting the [Macrocosmos website](https://macrocosmos.ai) or by joining us in our social channels:

[Substack](https://substack.com/@macrocosmosai)
[Twitter](https://twitter.com/MacrocosmosAI)
[LinkedIn](https://www.linkedin.com/in/MacrocosmosAI)
[License: MIT](https://opensource.org/licenses/MIT)