patrickbdevaney committed · commit ffcf62f · parent a78fdf8

v1 attempt at hf space api

Note: this view is limited to 50 files because the commit contains too many changes; see the raw diff for the full change set.
- CODE_OF_CONDUCT.md +128 -0
- CONTRIBUTING.md +238 -0
- Dockerfile +31 -0
- LICENSE +661 -0
- README.md +1903 -7
- SECURITY.md +38 -0
- api/advanced_api.py +1282 -0
- api/agent_api_test.py +291 -0
- api/api_telemetry_draft.txt +936 -0
- api/api_test.py +254 -0
- api/api_tests.py +472 -0
- api/main.py +981 -0
- api/requirements.txt +11 -0
- api/skypilot.yaml +37 -0
- api/test_api.py +112 -0
- docs/.readthedocs.yaml +11 -0
- docs/applications/azure_openai.md +131 -0
- docs/applications/blog.md +468 -0
- docs/applications/business-analyst-agent.md +976 -0
- docs/applications/compliance_swarm.md +0 -0
- docs/applications/customer_support.md +42 -0
- docs/applications/discord.md +105 -0
- docs/applications/enterprise.md +0 -0
- docs/applications/marketing_agencies.md +64 -0
- docs/assets/css/extra.css +27 -0
- docs/assets/img/SwarmsLogoIcon.png +0 -0
- docs/assets/img/agent_def.png +0 -0
- docs/assets/img/docs/query-plan-mini.png +0 -0
- docs/assets/img/docs/query-plan.png +0 -0
- docs/assets/img/reliabilitythrough.png +0 -0
- docs/assets/img/swarmbanner.png +0 -0
- docs/assets/img/swarms-logo.png +0 -0
- docs/assets/img/swarmsbanner.png +0 -0
- docs/assets/img/tools/output.png +0 -0
- docs/clusterops/reference.md +334 -0
- docs/corporate/2024_2025_goals.md +146 -0
- docs/corporate/architecture.md +358 -0
- docs/corporate/bounties.md +86 -0
- docs/corporate/bounty_program.md +74 -0
- docs/corporate/checklist.md +122 -0
- docs/corporate/cost_analysis.md +100 -0
- docs/corporate/culture.md +56 -0
- docs/corporate/data_room.md +112 -0
- docs/corporate/demos.md +9 -0
- docs/corporate/design.md +152 -0
- docs/corporate/distribution.md +469 -0
- docs/corporate/failures.md +104 -0
- docs/corporate/faq.md +110 -0
- docs/corporate/flywheel.md +101 -0
- docs/corporate/front_end_contributors.md +40 -0
CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at

All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of actions.

**Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
CONTRIBUTING.md
ADDED
@@ -0,0 +1,238 @@
# Contribution Guidelines

---

## Table of Contents

- [Project Overview](#project-overview)
- [Getting Started](#getting-started)
  - [Installation](#installation)
  - [Project Structure](#project-structure)
- [How to Contribute](#how-to-contribute)
  - [Reporting Issues](#reporting-issues)
  - [Submitting Pull Requests](#submitting-pull-requests)
- [Coding Standards](#coding-standards)
  - [Type Annotations](#type-annotations)
  - [Docstrings and Documentation](#docstrings-and-documentation)
  - [Testing](#testing)
  - [Code Style](#code-style)
- [Areas Needing Contributions](#areas-needing-contributions)
  - [Writing Tests](#writing-tests)
  - [Improving Documentation](#improving-documentation)
  - [Creating Multi-Agent Orchestration Methods](#creating-multi-agent-orchestration-methods)
- [Community and Support](#community-and-support)
- [License](#license)

---

## Project Overview

**swarms** is a library focused on making it simple to orchestrate agents to automate real-world activities. The goal is to automate the world economy with these swarms of agents.

We need your help to:

- **Write Tests**: Ensure the reliability and correctness of the codebase.
- **Improve Documentation**: Maintain clear and comprehensive documentation.
- **Add New Orchestration Methods**: Contribute new multi-agent orchestration methods.
- **Remove Defunct Code**: Clear out dead, broken, or duplicated code.

Your contributions will help us push the boundaries of AI and make this library a valuable resource for the community.

---

## Getting Started

### Installation

You can install swarms using `pip`:

```bash
pip3 install swarms
```

Alternatively, you can clone the repository:

```bash
git clone https://github.com/kyegomez/swarms
```
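
Either way, a quick import serves as a sanity check that the install succeeded (a minimal sketch; it assumes nothing beyond the package being importable):

```python
# Verify the installation: this fails loudly if swarms is not on the path.
import swarms

print("swarms imported successfully")
```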

### Project Structure

- **`swarms/`**: Contains all the source code for the library.
- **`examples/`**: Includes example scripts and notebooks demonstrating how to use the library.
- **`tests/`**: (To be created) Will contain unit tests for the library.
- **`docs/`**: (To be maintained) Contains documentation files.

---

## How to Contribute

### Reporting Issues

If you find any bugs or inconsistencies, or have suggestions for enhancements, please open an issue on GitHub:

1. **Search Existing Issues**: Before opening a new issue, check if it has already been reported.
2. **Open a New Issue**: If it hasn't been reported, create a new issue and provide detailed information.
   - **Title**: A concise summary of the issue.
   - **Description**: A detailed description, steps to reproduce, expected behavior, and any relevant logs or screenshots.
3. **Label Appropriately**: Use labels to categorize the issue (e.g., bug, enhancement, documentation).

### Submitting Pull Requests

We welcome pull requests (PRs) for bug fixes, improvements, and new features. Please follow these guidelines:

1. **Fork the Repository**: Create a personal fork of the repository on GitHub.
2. **Clone Your Fork**: Clone your forked repository (not the upstream repo) to your local machine.

   ```bash
   git clone https://github.com/<your-username>/swarms.git
   ```

3. **Create a New Branch**: Use a descriptive branch name.

   ```bash
   git checkout -b feature/your-feature-name
   ```

4. **Make Your Changes**: Implement your code, ensuring it adheres to the coding standards.
5. **Add Tests**: Write tests to cover your changes.
6. **Commit Your Changes**: Write clear and concise commit messages.

   ```bash
   git commit -am "Add feature X"
   ```

7. **Push to Your Fork**:

   ```bash
   git push origin feature/your-feature-name
   ```

8. **Create a Pull Request**:

   - Go to the original repository on GitHub.
   - Click on "New Pull Request".
   - Select your branch and create the PR.
   - Provide a clear description of your changes and reference any related issues.

9. **Respond to Feedback**: Be prepared to make changes based on code reviews.

**Note**: It's recommended to create small and focused PRs for easier review and faster integration.

---

## Coding Standards

To maintain code quality and consistency, please adhere to the following standards.

### Type Annotations

- **Mandatory**: All functions and methods must have type annotations.
- **Example**:

  ```python
  def add_numbers(a: int, b: int) -> int:
      return a + b
  ```

- **Benefits**:
  - Improves code readability.
  - Helps with static type checking tools (see the sketch below).
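
For instance, a checker such as `mypy` can catch a mismatched call before the code ever runs. A short illustrative sketch (the exact error wording varies by mypy version):

```python
def add_numbers(a: int, b: int) -> int:
    return a + b


# Running `mypy` over this file flags the call below, e.g.:
#   error: Argument 1 to "add_numbers" has incompatible type "str"; expected "int"
add_numbers("1", 2)
```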

### Docstrings and Documentation

- **Docstrings**: Every public class, function, and method must have a docstring following the [Google Python Style Guide](http://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) or [NumPy Docstring Standard](https://numpydoc.readthedocs.io/en/latest/format.html).
- **Content**:
  - **Description**: Briefly describe what the function or class does.
  - **Args**: List and describe each parameter.
  - **Returns**: Describe the return value(s).
  - **Raises**: List any exceptions that are raised.
- **Example**:

  ```python
  from typing import List

  def calculate_mean(values: List[float]) -> float:
      """
      Calculates the mean of a list of numbers.

      Args:
          values (List[float]): A list of numerical values.

      Returns:
          float: The mean of the input values.

      Raises:
          ValueError: If the input list is empty.
      """
      if not values:
          raise ValueError("The input list is empty.")
      return sum(values) / len(values)
  ```

- **Documentation**: Update or create documentation pages if your changes affect the public API.

### Testing

- **Required**: All new features and bug fixes must include appropriate unit tests.
- **Framework**: Use `unittest`, `pytest`, or a similar testing framework.
- **Test Location**: Place tests in the `tests/` directory, mirroring the structure of `swarms/` (see the sketch after this list).
- **Test Coverage**: Aim for high test coverage to ensure code reliability.
- **Running Tests**: Run the full suite from the repository root:

  ```bash
  pytest tests/
  ```
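
As a sketch of what such a test could look like under this layout (the module path `swarms.utils.math` and the `calculate_mean` helper are illustrative placeholders, not confirmed library API):

```python
# tests/utils/test_math.py -- hypothetical path mirroring swarms/utils/math.py
import pytest

from swarms.utils.math import calculate_mean  # illustrative import


def test_calculate_mean_returns_average() -> None:
    assert calculate_mean([1.0, 2.0, 3.0]) == 2.0


def test_calculate_mean_rejects_empty_list() -> None:
    # The docstring contract above promises ValueError on empty input.
    with pytest.raises(ValueError):
        calculate_mean([])
```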

### Code Style

- **PEP 8 Compliance**: Follow [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guidelines.
- **Linting Tools**: Use `flake8`, `black`, or `pylint` to check code style.
- **Consistency**: Maintain consistency with the existing codebase.

---

## Areas Needing Contributions

We have several areas where contributions are particularly welcome.

### Writing Tests

- **Goal**: Increase test coverage to ensure the library's robustness.
- **Tasks**:
  - Write unit tests for existing code in `swarms/`.
  - Identify edge cases and potential failure points.
  - Ensure tests are repeatable and independent.

### Improving Documentation

- **Goal**: Maintain clear and comprehensive documentation for users and developers.
- **Tasks**:
  - Update docstrings to reflect any changes.
  - Add examples and tutorials in the `examples/` directory.
  - Improve or expand the content in the `docs/` directory.

### Creating Multi-Agent Orchestration Methods

- **Goal**: Provide new multi-agent orchestration methods.

---

## Community and Support

- **Communication**: Engage with the community by participating in discussions on issues and pull requests.
- **Respect**: Maintain a respectful and inclusive environment.
- **Feedback**: Be open to receiving and providing constructive feedback.

---

## License

By contributing to swarms, you agree that your contributions will be licensed under the [MIT License](LICENSE).

---

Thank you for contributing to swarms! Your efforts help make this project better for everyone.

If you have any questions or need assistance, please feel free to open an issue or reach out to the maintainers.
Dockerfile
ADDED
@@ -0,0 +1,31 @@
# Use an official Python runtime as a parent image
FROM python:3.11-slim

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set the working directory in the container
WORKDIR /usr/src/swarms

# Install system dependencies (useful for building packages)
RUN apt-get update && apt-get install -y build-essential libpq-dev && rm -rf /var/lib/apt/lists/*

# Copy the requirements file to the container
COPY requirements.txt .

# Install Python dependencies
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application files to the container
COPY . .

# Install the 'swarms' package from the local repository
# (must come after COPY . . so the package source exists in the image)
RUN pip install -e .

# Expose the port for FastAPI
EXPOSE 8000

# Command to run the FastAPI app using Uvicorn
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
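The CMD above expects an ASGI application object named `app` in `api/main.py`. That file is added in this commit but not shown in this truncated view; a minimal sketch of the shape Uvicorn expects (the app title and `/health` route are illustrative assumptions, not the actual contents of `api/main.py`):

```python
# api/main.py -- illustrative skeleton only; the real file in this commit
# is not visible in this 50-file view.
from fastapi import FastAPI

app = FastAPI(title="Swarms API")  # hypothetical title


@app.get("/health")
def health() -> dict:
    # Hypothetical liveness endpoint served on the container's exposed port 8000.
    return {"status": "ok"}
```

With an `app` object like this in place, the container command resolves `api.main:app` and serves it on the port the Dockerfile exposes.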
LICENSE
ADDED
@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007

Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The GNU Affero General Public License is a free, copyleft license for software and other kinds of works, specifically designed to ensure cooperation with the community in the case of network server software.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

Developers that use our General Public Licenses protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License which gives you legal permission to copy, distribute and/or modify the software.

A secondary benefit of defending all users' freedom is that improvements made in alternate versions of the program, if they receive widespread use, become available for other developers to incorporate. Many developers of free software are heartened and encouraged by the resulting cooperation. However, in the case of software used on network servers, this result may fail to come about. The GNU General Public License permits making a modified version and letting the public access it on a server without ever releasing its source code to the public.

The GNU Affero General Public License is designed specifically to ensure that, in such cases, the modified source code becomes available to the community. It requires the operator of a network server to provide the source code of the modified version running there to the users of that server. Therefore, public use of a modified version, on a publicly accessible server, gives the public access to the source code of the modified version.

An older license, called the Affero General Public License and published by Affero, was designed to accomplish similar goals. This is a different license, not a version of the Affero GPL, but Affero has released a new version of the Affero GPL which permits relicensing under this license.

The precise terms and conditions for copying, distribution and modification follow.

TERMS AND CONDITIONS

0. Definitions.

"This License" refers to version 3 of the GNU Affero General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based on the Program.

To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.

1. Source Code.

The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.

A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.

The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.

The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.

The Corresponding Source for a work in source code form is that same work.

2. Basic Permissions.

All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.

3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.

When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.

4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.

5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified it, and giving a relevant date.

b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".

c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.

A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.

6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:

a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.

d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.

A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.

"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.

If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).

The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.

7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or authors of the material; or

e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.

All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.

8. Termination.

You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.

9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.

10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.

11. Patents.

A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.

If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.

A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.

12. No Surrender of Others' Freedom.

If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.

13. Remote Network Interaction; Use with the GNU General Public License.

Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software. This Corresponding Source shall include the Corresponding Source for any work covered by version 3 of the GNU General Public License that is incorporated pursuant to the following paragraph.

Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the work with which it is combined will remain governed by version 3 of the GNU General Public License.

14. Revised Versions of this License.

The Free Software Foundation may publish revised and/or new versions of the GNU Affero General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation.

If the Program specifies that a proxy can decide which future versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.

Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.

15. Disclaimer of Warranty.

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

16. Limitation of Liability.

IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
607 |
+
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
608 |
+
SUCH DAMAGES.
|
609 |
+
|
610 |
+
17. Interpretation of Sections 15 and 16.
|
611 |
+
|
612 |
+
If the disclaimer of warranty and limitation of liability provided
|
613 |
+
above cannot be given local legal effect according to their terms,
|
614 |
+
reviewing courts shall apply local law that most closely approximates
|
615 |
+
an absolute waiver of all civil liability in connection with the
|
616 |
+
Program, unless a warranty or assumption of liability accompanies a
|
617 |
+
copy of the Program in return for a fee.
|
618 |
+
|
619 |
+
END OF TERMS AND CONDITIONS
|
620 |
+
|
621 |
+
How to Apply These Terms to Your New Programs
|
622 |
+
|
623 |
+
If you develop a new program, and you want it to be of the greatest
|
624 |
+
possible use to the public, the best way to achieve this is to make it
|
625 |
+
free software which everyone can redistribute and change under these terms.
|
626 |
+
|
627 |
+
To do so, attach the following notices to the program. It is safest
|
628 |
+
to attach them to the start of each source file to most effectively
|
629 |
+
state the exclusion of warranty; and each file should have at least
|
630 |
+
the "copyright" line and a pointer to where the full notice is found.
|
631 |
+
|
632 |
+
Swarms provides multi-agent orchestration mechanisms to enable llm agents to collaborate and work together
|
633 |
+
Copyright (C) <2025> <Kye Gomez Chairman of TGSC>
|
634 |
+
|
635 |
+
This program is free software: you can redistribute it and/or modify
|
636 |
+
it under the terms of the GNU Affero General Public License as published
|
637 |
+
by the Free Software Foundation, either version 3 of the License, or
|
638 |
+
(at your option) any later version.
|
639 |
+
|
640 |
+
This program is distributed in the hope that it will be useful,
|
641 |
+
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
642 |
+
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
643 |
+
GNU Affero General Public License for more details.
|
644 |
+
|
645 |
+
You should have received a copy of the GNU Affero General Public License
|
646 |
+
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
647 |
+
|
648 |
+
Also add information on how to contact you by electronic and paper mail.
|
649 |
+
|
650 |
+
If your software can interact with users remotely through a computer
|
651 |
+
network, you should also make sure that it provides a way for users to
|
652 |
+
get its source. For example, if your program is a web application, its
|
653 |
+
interface could display a "Source" link that leads users to an archive
|
654 |
+
of the code. There are many ways you could offer source, and different
|
655 |
+
solutions will be better for different programs; see section 13 for the
|
656 |
+
specific requirements.
|
657 |
+
|
658 |
+
You should also get your employer (if you work as a programmer) or school,
|
659 |
+
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
660 |
+
For more information on this, and how to apply and follow the GNU AGPL, see
|
661 |
+
<https://www.gnu.org/licenses/>.
|
README.md
CHANGED
@@ -1,10 +1,1906 @@
<div align="center">
  <a href="https://swarms.world">
    <img src="https://github.com/kyegomez/swarms/blob/master/images/swarmslogobanner.png" style="margin: 15px; max-width: 300px" width="50%" alt="Logo">
  </a>
</div>
<p align="center">
  <em>The Enterprise-Grade Production-Ready Multi-Agent Orchestration Framework</em>
</p>

<p align="center">
  <a href="https://pypi.org/project/swarms/" target="_blank">
    <img alt="Python" src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" />
    <img alt="Version" src="https://img.shields.io/pypi/v/swarms?style=for-the-badge&color=3670A0">
  </a>
</p>
<p align="center">
  <a href="https://twitter.com/swarms_corp/">🐦 Twitter</a>
  <span> • </span>
  <a href="https://discord.gg/agora-999382051935506503">📢 Discord</a>
  <span> • </span>
  <a href="https://swarms.world">Swarms Platform</a>
  <span> • </span>
  <a href="https://docs.swarms.world">📙 Documentation</a>
</p>

[![Join our Discord](https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/agora-999382051935506503) [![Subscribe on YouTube](https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white)](https://www.youtube.com/@kyegomez3242) [![Connect on LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/in/kye-g-38759a207/) [![Follow on X.com](https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white)](https://x.com/kyegomezb)

[![GitHub issues](https://img.shields.io/github/issues/kyegomez/swarms)](https://github.com/kyegomez/swarms/issues) [![GitHub forks](https://img.shields.io/github/forks/kyegomez/swarms)](https://github.com/kyegomez/swarms/network) [![GitHub stars](https://img.shields.io/github/stars/kyegomez/swarms)](https://github.com/kyegomez/swarms/stargazers) [![GitHub license](https://img.shields.io/github/license/kyegomez/swarms)](https://github.com/kyegomez/swarms/blob/main/LICENSE) [![GitHub star chart](https://img.shields.io/github/stars/kyegomez/swarms?style=social)](https://star-history.com/#kyegomez/swarms) [![Dependency Status](https://img.shields.io/librariesio/github/kyegomez/swarms)](https://libraries.io/github/kyegomez/swarms) [![Downloads](https://static.pepy.tech/badge/swarms/month)](https://pepy.tech/project/swarms)

[![Share on Twitter](https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Share%20%40kyegomez/swarms)](https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on Facebook](https://img.shields.io/badge/Share-%20facebook-blue)](https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms) [![Share on LinkedIn](https://img.shields.io/badge/Share-%20linkedin-blue)](https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=)

[![Share on Reddit](https://img.shields.io/badge/-Share%20on%20Reddit-orange)](https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI) [![Share on Hacker News](https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange)](https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI) [![Share on Pinterest](https://img.shields.io/badge/-Share%20on%20Pinterest-red)](https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI) [![Share on WhatsApp](https://img.shields.io/badge/-Share%20on%20WhatsApp-green)](https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms)

## ✨ Features

| Category | Features | Benefits |
|----------|----------|-----------|
| 🏢 Enterprise Architecture | • Production-Ready Infrastructure<br>• High Reliability Systems<br>• Modular Design<br>• Comprehensive Logging | • Reduced downtime<br>• Easier maintenance<br>• Better debugging<br>• Enhanced monitoring |
| 🤖 Agent Orchestration | • Hierarchical Swarms<br>• Parallel Processing<br>• Sequential Workflows<br>• Graph-based Workflows<br>• Dynamic Agent Rearrangement | • Complex task handling<br>• Improved performance<br>• Flexible workflows<br>• Optimized execution |
| 🔄 Integration Capabilities | • Multi-Model Support<br>• Custom Agent Creation<br>• Extensive Tool Library<br>• Multiple Memory Systems | • Provider flexibility<br>• Custom solutions<br>• Extended functionality<br>• Enhanced memory management |
| 📈 Scalability | • Concurrent Processing<br>• Resource Management<br>• Load Balancing<br>• Horizontal Scaling | • Higher throughput<br>• Efficient resource use<br>• Better performance<br>• Easy scaling |
| 🛠️ Developer Tools | • Simple API<br>• Extensive Documentation<br>• Active Community<br>• CLI Tools | • Faster development<br>• Easy learning curve<br>• Community support<br>• Quick deployment |
| 🔐 Security Features | • Error Handling<br>• Rate Limiting<br>• Monitoring Integration<br>• Audit Logging | • Improved reliability<br>• API protection<br>• Better monitoring<br>• Enhanced tracking |
| 📊 Advanced Features | • SpreadsheetSwarm<br>• Group Chat<br>• Agent Registry<br>• Mixture of Agents | • Mass agent management<br>• Collaborative AI<br>• Centralized control<br>• Complex solutions |
| 🔌 Provider Support | • OpenAI<br>• Anthropic<br>• ChromaDB<br>• Custom Providers | • Provider flexibility<br>• Storage options<br>• Custom integration<br>• Vendor independence |
| 💪 Production Features | • Automatic Retries<br>• Async Support<br>• Environment Management<br>• Type Safety | • Better reliability<br>• Improved performance<br>• Easy configuration<br>• Safer code |
| 🎯 Use Case Support | • Task-Specific Agents<br>• Custom Workflows<br>• Industry Solutions<br>• Extensible Framework | • Quick deployment<br>• Flexible solutions<br>• Industry readiness<br>• Easy customization |

----

## Requirements
- `python3.10` or above
- Install swarms: `$ pip install -U swarms`
- A `.env` file with API keys from your providers, such as `OPENAI_API_KEY` and `ANTHROPIC_API_KEY`
- An `.env` variable with your desired workspace directory, `WORKSPACE_DIR="agent_workspace"`, or set it in your terminal with `export WORKSPACE_DIR="agent_workspace"`
- Finally, run `swarms onboarding` to get started.
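
For reference, a minimal `.env` might look like the sketch below. The values are placeholders, not real keys, and you only need the entries for the providers you actually use:

```bash
# .env -- placeholder values, not real keys
OPENAI_API_KEY="sk-..."
ANTHROPIC_API_KEY="sk-ant-..."
WORKSPACE_DIR="agent_workspace"
```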

## Guides and Walkthroughs
Refer to our documentation for production-grade implementation details.


| Section | Links |
|----------------------|--------------------------------------------------------------------------------------------|
| Installation | [Installation](https://docs.swarms.world/en/latest/swarms/install/install/) |
| Quickstart | [Get Started](https://docs.swarms.world/en/latest/swarms/install/quickstart/) |
| Agent Internal Mechanisms | [Agent Architecture](https://docs.swarms.world/en/latest/swarms/framework/agents_explained/) |
| Agent API | [Agent API](https://docs.swarms.world/en/latest/swarms/structs/agent/) |
| Integrating External Agents (Griptape, Autogen, etc.) | [Integrating External APIs](https://docs.swarms.world/en/latest/swarms/agents/external_party_agents/) |
| Creating Agents from YAML | [Creating Agents from YAML](https://docs.swarms.world/en/latest/swarms/agents/create_agents_yaml/) |
| Why You Need Swarms | [Why MultiAgent Collaboration is Necessary](https://docs.swarms.world/en/latest/swarms/concept/why/) |
| Swarm Architectures Analysis | [Swarm Architectures](https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/) |
| Choosing the Right Swarm for Your Business Problem | [CLICK HERE](https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/) |
| AgentRearrange Docs | [CLICK HERE](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/) |

## Install 💻
Install the following packages with copy and paste:

```bash
$ pip3 install -U swarms swarm-models swarms-memory
```

## Onboarding

Now that you have installed swarms with `pip3 install -U swarms`, you have access to the `CLI`. Get onboarded with:

```bash
swarms onboarding
```

You can also run this command for help:

```bash
swarms help
```

For more documentation on the CLI, [CLICK HERE](https://docs.swarms.world/en/latest/swarms/cli/main/)

---

# Usage Examples 🤖
Here are some example scripts to get you started. For more comprehensive documentation, visit our [docs](https://docs.swarms.world/en/latest/).

| Example Name | Description | Type of Examples | Link |
| --- | --- | --- | --- |
| Swarms Examples | A collection of simple examples to demonstrate Swarms capabilities. | Basic Usage | [https://github.com/The-Swarm-Corporation/swarms-examples?tab=readme-ov-file](https://github.com/The-Swarm-Corporation/swarms-examples?tab=readme-ov-file) |
| Cookbook | A comprehensive guide with recipes for various use cases and scenarios. | Advanced Usage | [https://github.com/The-Swarm-Corporation/Cookbook](https://github.com/The-Swarm-Corporation/Cookbook) |

---

## `Agent` Class
The `Agent` class is a fundamental component of the Swarms framework, designed to execute tasks autonomously. It fuses LLMs, tools, and long-term memory capabilities to create a full-stack agent. The `Agent` class is highly customizable, allowing for fine-grained control over its behavior and interactions.


### `run` Method
The `run` method is the primary entry point for executing tasks with an `Agent` instance. It accepts a task string as the main input and processes it according to the agent's configuration. It can also accept an `img` parameter, such as `img="image_filepath.png"`, to process images if a VLM such as `GPT4VisionAPI` is attached.
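
For instance, a minimal sketch of an image-enabled call might look like the following; the agent construction, model name, and file path here are illustrative:

```python
from swarms import Agent

# Minimal sketch: an agent backed by a vision-capable model (model name illustrative)
vision_agent = Agent(
    agent_name="Vision-Agent",
    model_name="gpt-4o",
    max_loops=1,
)

# Pass the image path through the `img` parameter alongside the task string
vision_agent.run(
    "Identify any visible defects in this image.",
    img="image_filepath.png",  # illustrative file path
)
```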

## Simple Example

```python
from swarms import Agent

agent = Agent(
    agent_name="Stock-Analysis-Agent",
    model_name="gpt-4o-mini",
    max_loops="auto",
    interactive=True,
    streaming_on=True,
)

agent.run("What is the current market trend for tech stocks?")
```

### Settings and Customization
The `Agent` class offers a range of settings to tailor its behavior to specific needs. Some key settings include:

| Setting | Description | Default Value |
| --- | --- | --- |
| `agent_name` | The name of the agent. | "DefaultAgent" |
| `system_prompt` | The system prompt to use for the agent. | "Default system prompt." |
| `llm` | The language model to use for processing tasks. | `OpenAIChat` instance |
| `max_loops` | The maximum number of loops to execute for a task. | 1 |
| `autosave` | Enables or disables autosaving of the agent's state. | False |
| `dashboard` | Enables or disables the dashboard for the agent. | False |
| `verbose` | Controls the verbosity of the agent's output. | False |
| `dynamic_temperature_enabled` | Enables or disables dynamic temperature adjustment for the language model. | False |
| `saved_state_path` | The path to save the agent's state. | "agent_state.json" |
| `user_name` | The username associated with the agent. | "default_user" |
| `retry_attempts` | The number of retry attempts for failed tasks. | 1 |
| `context_length` | The maximum length of the context to consider for tasks. | 200000 |
| `return_step_meta` | Controls whether to return step metadata in the output. | False |
| `output_type` | The type of output to return (e.g., "json", "string"). | "string" |

```python
import os
from swarms import Agent

from swarms.prompts.finance_agent_sys_prompt import (
    FINANCIAL_AGENT_SYS_PROMPT,
)

# Initialize the agent
agent = Agent(
    agent_name="Financial-Analysis-Agent",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    model_name="gpt-4o-mini",
    max_loops=1,
    autosave=True,
    dashboard=False,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="finance_agent.json",
    user_name="swarms_corp",
    retry_attempts=1,
    context_length=200000,
    return_step_meta=False,
    output_type="string",
    streaming_on=False,
)


agent.run(
    "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?"
)
```
-----

### Integrating RAG with Swarms for Enhanced Long-Term Memory
An `Agent` can be equipped with quasi-infinite long-term memory using RAG (Retrieval-Augmented Generation) for advanced document understanding, analysis, and retrieval capabilities.

**Mermaid Diagram for RAG Integration**
```mermaid
graph TD
    A[Initialize Agent with RAG] --> B[Receive Task]
    B --> C[Query Long-Term Memory]
    C --> D[Process Task with Context]
    D --> E[Generate Response]
    E --> F[Update Long-Term Memory]
    F --> G[Return Output]
```

```python
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import (
    FINANCIAL_AGENT_SYS_PROMPT,
)
import os

from swarms_memory import ChromaDB

# Initialize the ChromaDB client for long-term memory management
chromadb = ChromaDB(
    metric="cosine",  # Metric for similarity measurement
    output_dir="finance_agent_rag",  # Directory for storing RAG data
    # docs_folder="artifacts",  # Uncomment and specify the folder containing your documents
)

# Initialize the agent with RAG capabilities
agent = Agent(
    agent_name="Financial-Analysis-Agent",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    agent_description="Agent creates a comprehensive financial analysis",
    model_name="gpt-4o-mini",
    max_loops="auto",  # Auto-adjusts loops based on task complexity
    autosave=True,  # Automatically saves agent state
    dashboard=False,  # Disables dashboard for this example
    verbose=True,  # Enables verbose mode for detailed output
    streaming_on=True,  # Enables streaming for real-time processing
    dynamic_temperature_enabled=True,  # Dynamically adjusts temperature for optimal performance
    saved_state_path="finance_agent.json",  # Path to save agent state
    user_name="swarms_corp",  # User name for the agent
    retry_attempts=3,  # Number of retry attempts for failed tasks
    context_length=200000,  # Maximum length of the context to consider
    long_term_memory=chromadb,  # Integrates ChromaDB for long-term memory management
    return_step_meta=False,
    output_type="string",
)

# Run the agent with a sample task
agent.run(
    "What are the components of a startup's stock incentive equity plan?"
)
```


-------

### Misc Agent Settings
We provide a vast array of features to save agent states using JSON, YAML, or TOML, upload PDFs, run batched jobs, and much more!


**Method Table**

| Method | Description |
| --- | --- |
| `to_dict()` | Converts the agent object to a dictionary. |
| `to_toml()` | Converts the agent object to a TOML string. |
| `model_dump_json()` | Dumps the model to a JSON file. |
| `model_dump_yaml()` | Dumps the model to a YAML file. |
| `ingest_docs()` | Ingests documents into the agent's knowledge base. |
| `receive_message()` | Receives a message from a user and processes it. |
| `send_agent_message()` | Sends a message from the agent to a user. |
| `filtered_run()` | Runs the agent with a filtered system prompt. |
| `bulk_run()` | Runs the agent with multiple system prompts. |
| `add_memory()` | Adds a memory to the agent. |
| `check_available_tokens()` | Checks the number of available tokens for the agent. |
| `tokens_checks()` | Performs token checks for the agent. |
| `print_dashboard()` | Prints the dashboard of the agent. |
| `get_docs_from_doc_folders()` | Fetches all the documents from the doc folders. |
| `activate_agentops()` | Activates agent operations. |
| `check_end_session_agentops()` | Checks the end of the session for agent operations. |


```python
# Convert the agent object to a dictionary
print(agent.to_dict())
print(agent.to_toml())
print(agent.model_dump_json())
print(agent.model_dump_yaml())

# Ingest documents into the agent's knowledge base
agent.ingest_docs("your_pdf_path.pdf")

# Receive a message from a user and process it
agent.receive_message(name="agent_name", message="message")

# Send a message from the agent to a user
agent.send_agent_message(agent_name="agent_name", message="message")

# Ingest multiple documents into the agent's knowledge base
agent.ingest_docs("your_pdf_path.pdf", "your_csv_path.csv")

# Run the agent with a filtered system prompt
agent.filtered_run(
    "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?"
)

# Run the agent with multiple system prompts
agent.bulk_run(
    [
        "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?",
        "Another system prompt",
    ]
)

# Add a memory to the agent
agent.add_memory("Add a memory to the agent")

# Check the number of available tokens for the agent
agent.check_available_tokens()

# Perform token checks for the agent
agent.tokens_checks()

# Print the dashboard of the agent
agent.print_dashboard()

# Fetch all the documents from the doc folders
agent.get_docs_from_doc_folders()

# Activate agent ops
agent.activate_agentops()
agent.check_end_session_agentops()

# Dump the model to a JSON file
agent.model_dump_json()
print(agent.to_toml())
```


### `Agent` with Pydantic BaseModel as Output Type
The following is an example of an agent that takes a Pydantic BaseModel as its tool schema and returns output in the same structure:

```python
from pydantic import BaseModel, Field
from swarms import Agent


# Initialize the schema for the person's information
class Schema(BaseModel):
    name: str = Field(..., title="Name of the person")
    age: int = Field(..., title="Age of the person")
    is_student: bool = Field(..., title="Whether the person is a student")
    courses: list[str] = Field(
        ..., title="List of courses the person is taking"
    )


# Instantiate the schema
tool_schema = Schema(
    name="Tool Name",
    age=1,
    is_student=True,
    courses=["Course1", "Course2"],
)

# Define the task to generate a person's information
task = "Generate a person's information based on the following schema:"

# Initialize the agent
agent = Agent(
    agent_name="Person Information Generator",
    system_prompt=(
        "Generate a person's information based on the following schema:"
    ),
    # Set the tool schema -- this is the key difference
    tool_schema=tool_schema,
    model_name="gpt-4o",
    max_loops=3,
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    interactive=True,
    # Set the output type to the tool schema, which is a BaseModel
    output_type=tool_schema,  # or dict, or str
    metadata_output_type="json",
    # List of schemas that the agent can handle
    list_base_models=[tool_schema],
    function_calling_format_type="OpenAI",
    function_calling_type="json",  # or soon yaml
)

# Run the agent to generate the person's information
generated_data = agent.run(task)

# Print the generated data
print(f"Generated data: {generated_data}")
```

### Multi Modal Autonomous Agent
Run the agent with multiple modalities, useful for various real-world tasks in manufacturing, logistics, and health.

```python
import os
from dotenv import load_dotenv
from swarms import Agent

from swarm_models import GPT4VisionAPI

# Load the environment variables
load_dotenv()


# Initialize the language model
llm = GPT4VisionAPI(
    openai_api_key=os.environ.get("OPENAI_API_KEY"),
    max_tokens=500,
)

# Initialize the task
task = (
    "Analyze this image of an assembly line and identify any issues such as"
    " misaligned parts, defects, or deviations from the standard assembly"
    " process. If there is anything unsafe in the image, explain why it is"
    " unsafe and how it could be improved."
)
img = "assembly_line.jpg"

# Initialize the workflow
agent = Agent(
    agent_name="Multi-ModalAgent",
    llm=llm,
    max_loops="auto",
    autosave=True,
    dashboard=True,
    multi_modal=True,
)

# Run the workflow on a task
agent.run(task, img)
```
----

### Local Agent `ToolAgent`
ToolAgent is a fully local agent that can use tools through JSON function calling. It takes any open-source model from Hugging Face and is extremely modular and plug-and-play. We need help adding general support for all models soon.


```python
from pydantic import BaseModel, Field
from transformers import AutoModelForCausalLM, AutoTokenizer

from swarms import ToolAgent
from swarms.tools.json_utils import base_model_to_json

# Load the pre-trained model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "databricks/dolly-v2-12b",
    load_in_4bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")


# Initialize the schema for the person's information
class Schema(BaseModel):
    name: str = Field(..., title="Name of the person")
    age: int = Field(..., title="Age of the person")
    is_student: bool = Field(
        ..., title="Whether the person is a student"
    )
    courses: list[str] = Field(
        ..., title="List of courses the person is taking"
    )


# Convert the schema to a JSON string
tool_schema = base_model_to_json(Schema)

# Define the task to generate a person's information
task = (
    "Generate a person's information based on the following schema:"
)

# Create an instance of the ToolAgent class
agent = ToolAgent(
    name="dolly-function-agent",
    description="An agent to create child data",
    model=model,
    tokenizer=tokenizer,
    json_schema=tool_schema,
)

# Run the agent to generate the person's information
generated_data = agent.run(task)

# Print the generated data
print(f"Generated data: {generated_data}")
```


## Understanding Swarms

A swarm refers to a group of two or more agents working collaboratively to achieve a common goal. These agents can be software entities, such as LLMs, that interact with each other to perform complex tasks. The concept of a swarm is inspired by natural systems like ant colonies or bird flocks, where simple individual behaviors lead to complex group dynamics and problem-solving capabilities.

### How Swarm Architectures Facilitate Communication

Swarm architectures are designed to establish and manage communication between agents within a swarm. These architectures define how agents interact, share information, and coordinate their actions to achieve the desired outcomes. Here are some key aspects of swarm architectures:

1. **Hierarchical Communication**: In hierarchical swarms, communication flows from higher-level agents to lower-level agents. Higher-level agents act as coordinators, distributing tasks and aggregating results. This structure is efficient for tasks that require top-down control and decision-making.

2. **Parallel Communication**: In parallel swarms, agents operate independently and communicate with each other as needed. This architecture is suitable for tasks that can be processed concurrently without dependencies, allowing for faster execution and scalability.

3. **Sequential Communication**: Sequential swarms process tasks in a linear order, where each agent's output becomes the input for the next agent. This ensures that tasks with dependencies are handled in the correct sequence, maintaining the integrity of the workflow.

Swarm architectures leverage these communication patterns to ensure that agents work together efficiently, adapting to the specific requirements of the task at hand. By defining clear communication protocols and interaction models, swarm architectures enable the seamless orchestration of multiple agents, leading to enhanced performance and problem-solving capabilities. The toy sketch below contrasts the sequential and parallel patterns.
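
As a toy illustration of these patterns, independent of the Swarms API, the sketch below stands in for LLM-backed agents with plain Python callables: sequential communication threads each output into the next input, while parallel communication fans the same task out to several agents at once.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "agents": plain callables standing in for LLM-backed agents
def researcher(task: str) -> str:
    return f"research notes on: {task}"

def writer(notes: str) -> str:
    return f"draft based on ({notes})"

def reviewer(draft: str) -> str:
    return f"reviewed: {draft}"

task = "market trends for tech stocks"

# Sequential communication: each agent's output feeds the next agent
output = task
for agent in (researcher, writer, reviewer):
    output = agent(output)
print(output)

# Parallel communication: the same task fans out to independent agents
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda a: a(task), (researcher, writer, reviewer)))
print(results)
```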

| **Name** | **Description** | **Code Link** | **Use Cases** |
|-------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------|
| Hierarchical Swarms | A system where agents are organized in a hierarchy, with higher-level agents coordinating lower-level agents to achieve complex tasks. | [Code Link](https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/#hierarchical-swarm) | Manufacturing process optimization, multi-level sales management, healthcare resource coordination |
| Agent Rearrange | A setup where agents rearrange themselves dynamically based on the task requirements and environmental conditions. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/) | Adaptive manufacturing lines, dynamic sales territory realignment, flexible healthcare staffing |
| Concurrent Workflows | Agents perform different tasks simultaneously, coordinating to complete a larger goal. | [Code Link](https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/#concurrent-workflows) | Concurrent production lines, parallel sales operations, simultaneous patient care processes |
| Sequential Coordination | Agents perform tasks in a specific sequence, where the completion of one task triggers the start of the next. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/sequential_workflow/) | Step-by-step assembly lines, sequential sales processes, stepwise patient treatment workflows |
| Parallel Processing | Agents work on different parts of a task simultaneously to speed up the overall process. | [Code Link](https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/#parallel-processing) | Parallel data processing in manufacturing, simultaneous sales analytics, concurrent medical tests |
| Mixture of Agents | A heterogeneous swarm where agents with different capabilities are combined to solve complex problems. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/moa/) | Financial forecasting, complex problem-solving requiring diverse skills |
| Graph Workflow | Agents collaborate in a directed acyclic graph (DAG) format to manage dependencies and parallel tasks. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/graph_workflow/) | AI-driven software development pipelines, complex project management |
| Group Chat | Agents engage in a chat-like interaction to reach decisions collaboratively. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/group_chat/) | Real-time collaborative decision-making, contract negotiations |
| Agent Registry | A centralized registry where agents are stored, retrieved, and invoked dynamically. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/agent_registry/) | Dynamic agent management, evolving recommendation engines |
| Spreadsheet Swarm | Manages tasks at scale, tracking agent outputs in a structured format like CSV files. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/spreadsheet_swarm/) | Large-scale marketing analytics, financial audits |
| Forest Swarm | A swarm structure that organizes agents in a tree-like hierarchy for complex decision-making processes. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/forest_swarm/) | Multi-stage workflows, hierarchical reinforcement learning |
| Swarm Router | Routes and chooses the swarm architecture based on the task requirements and available agents. | [Code Link](https://docs.swarms.world/en/latest/swarms/structs/swarm_router/) | Dynamic task routing, adaptive swarm architecture selection, optimized agent allocation |

### `SequentialWorkflow`
Sequential Workflow enables you to execute tasks sequentially with `Agent`, passing each agent's output to the next agent in line until the specified max loops are reached.

```mermaid
graph LR
    A[Agent 1] --> B[Agent 2]
    B --> C[Agent 3]
    C --> D[Agent 4]
    D --> E[Max Loops]
    E --> F[End]
```

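As a minimal sketch of the interface documented below; the agent prompts and the task are illustrative, and it assumes your provider keys are configured in the environment:

```python
from swarms import Agent, SequentialWorkflow

writer = Agent(
    agent_name="Writer",
    system_prompt="Draft a short answer to the task.",
    model_name="gpt-4o-mini",
    max_loops=1,
)
editor = Agent(
    agent_name="Editor",
    system_prompt="Tighten and fact-check the draft you receive.",
    model_name="gpt-4o-mini",
    max_loops=1,
)

# The writer runs first; its output becomes the editor's input
workflow = SequentialWorkflow(agents=[writer, editor], max_loops=1)
print(workflow.run("Explain why multi-agent orchestration is useful."))
```
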
### Methods

| Method | Description | Parameters | Return Value |
|--------|-------------|------------|--------------|
| `__init__` | Initialize the SequentialWorkflow | `agents`: List of Agent objects<br>`max_loops`: Maximum number of iterations<br>`verbose`: Boolean for verbose output | None |
| `run` | Execute the workflow | `input_data`: Initial input for the first agent | Final output after all agents have processed |

### Inputs

| Input | Type | Description |
|-------|------|-------------|
| `agents` | List[Agent] | List of Agent objects to be executed sequentially |
| `max_loops` | int | Maximum number of times the entire sequence will be repeated |
| `verbose` | bool | If True, print detailed information during execution |

### Output

The `run` method returns the final output after all agents have processed the input sequentially.

In the example below, each `Agent` represents a task that is executed sequentially. The output of each agent is passed to the next agent in the sequence until the maximum number of loops is reached. This workflow is particularly useful for tasks that require a series of steps to be executed in a specific order, such as data processing pipelines or complex calculations that rely on the output of previous steps.

```python
|
586 |
+
import os
|
587 |
+
from swarms import Agent, SequentialWorkflow
|
588 |
+
from swarm_models import OpenAIChat
|
589 |
+
|
590 |
+
# model = Anthropic(anthropic_api_key=os.getenv("ANTHROPIC_API_KEY"))
|
591 |
+
company = "Nvidia"
|
592 |
+
# Get the OpenAI API key from the environment variable
|
593 |
+
api_key = os.getenv("GROQ_API_KEY")
|
594 |
+
|
595 |
+
# Model
|
596 |
+
model = OpenAIChat(
|
597 |
+
openai_api_base="https://api.groq.com/openai/v1",
|
598 |
+
openai_api_key=api_key,
|
599 |
+
model_name="llama-3.1-70b-versatile",
|
600 |
+
temperature=0.1,
|
601 |
+
)
|
602 |
+
|
603 |
+
|
604 |
+
# Initialize the Managing Director agent
|
605 |
+
managing_director = Agent(
|
606 |
+
agent_name="Managing-Director",
|
607 |
+
system_prompt=f"""
|
608 |
+
As the Managing Director at Blackstone, your role is to oversee the entire investment analysis process for potential acquisitions.
|
609 |
+
Your responsibilities include:
|
610 |
+
1. Setting the overall strategy and direction for the analysis
|
611 |
+
2. Coordinating the efforts of the various team members and ensuring a comprehensive evaluation
|
612 |
+
3. Reviewing the findings and recommendations from each team member
|
613 |
+
4. Making the final decision on whether to proceed with the acquisition
|
614 |
+
|
615 |
+
For the current potential acquisition of {company}, direct the tasks for the team to thoroughly analyze all aspects of the company, including its financials, industry position, technology, market potential, and regulatory compliance. Provide guidance and feedback as needed to ensure a rigorous and unbiased assessment.
|
616 |
+
""",
|
617 |
+
llm=model,
|
618 |
+
max_loops=1,
|
619 |
+
dashboard=False,
|
620 |
+
streaming_on=True,
|
621 |
+
verbose=True,
|
622 |
+
stopping_token="<DONE>",
|
623 |
+
state_save_file_type="json",
|
624 |
+
saved_state_path="managing-director.json",
|
625 |
+
)
|
626 |
+
|
627 |
+
# Initialize the Vice President of Finance
|
628 |
+
vp_finance = Agent(
|
629 |
+
agent_name="VP-Finance",
|
630 |
+
system_prompt=f"""
|
631 |
+
As the Vice President of Finance at Blackstone, your role is to lead the financial analysis of potential acquisitions.
|
632 |
+
For the current potential acquisition of {company}, your tasks include:
|
633 |
+
1. Conducting a thorough review of {company}' financial statements, including income statements, balance sheets, and cash flow statements
|
634 |
+
2. Analyzing key financial metrics such as revenue growth, profitability margins, liquidity ratios, and debt levels
|
635 |
+
3. Assessing the company's historical financial performance and projecting future performance based on assumptions and market conditions
|
636 |
+
4. Identifying any financial risks or red flags that could impact the acquisition decision
|
637 |
+
5. Providing a detailed report on your findings and recommendations to the Managing Director
|
638 |
+
|
639 |
+
Be sure to consider factors such as the sustainability of {company}' business model, the strength of its customer base, and its ability to generate consistent cash flows. Your analysis should be data-driven, objective, and aligned with Blackstone's investment criteria.
|
640 |
+
""",
|
641 |
+
llm=model,
|
642 |
+
max_loops=1,
|
643 |
+
dashboard=False,
|
644 |
+
streaming_on=True,
|
645 |
+
verbose=True,
|
646 |
+
stopping_token="<DONE>",
|
647 |
+
state_save_file_type="json",
|
648 |
+
saved_state_path="vp-finance.json",
|
649 |
+
)
|
650 |
+
|
651 |
+
# Initialize the Industry Analyst
|
652 |
+
industry_analyst = Agent(
|
653 |
+
agent_name="Industry-Analyst",
|
654 |
+
system_prompt=f"""
|
655 |
+
As the Industry Analyst at Blackstone, your role is to provide in-depth research and analysis on the industries and markets relevant to potential acquisitions.
|
656 |
+
For the current potential acquisition of {company}, your tasks include:
|
657 |
+
1. Conducting a comprehensive analysis of the industrial robotics and automation solutions industry, including market size, growth rates, key trends, and future prospects
|
658 |
+
2. Identifying the major players in the industry and assessing their market share, competitive strengths and weaknesses, and strategic positioning
|
659 |
+
3. Evaluating {company}' competitive position within the industry, including its market share, differentiation, and competitive advantages
|
660 |
+
    4. Analyzing the key drivers and restraints for the industry, such as technological advancements, labor costs, regulatory changes, and economic conditions
    5. Identifying potential risks and opportunities for {company} based on the industry analysis, such as disruptive technologies, emerging markets, or shifts in customer preferences

    Your analysis should provide a clear and objective assessment of the attractiveness and future potential of the industrial robotics industry, as well as {company}'s positioning within it. Consider both short-term and long-term factors, and provide evidence-based insights to inform the investment decision.
    """,
    llm=model,
    max_loops=1,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    state_save_file_type="json",
    saved_state_path="industry-analyst.json",
)

# Initialize the Technology Expert
tech_expert = Agent(
    agent_name="Tech-Expert",
    system_prompt=f"""
    As the Technology Expert at Blackstone, your role is to assess the technological capabilities, competitive advantages, and potential risks of companies being considered for acquisition.
    For the current potential acquisition of {company}, your tasks include:
    1. Conducting a deep dive into {company}'s proprietary technologies, including its robotics platforms, automation software, and AI capabilities
    2. Assessing the uniqueness, scalability, and defensibility of {company}'s technology stack and intellectual property
    3. Comparing {company}'s technologies to those of its competitors and identifying any key differentiators or technology gaps
    4. Evaluating {company}'s research and development capabilities, including its innovation pipeline, engineering talent, and R&D investments
    5. Identifying any potential technology risks or disruptive threats that could impact {company}'s long-term competitiveness, such as emerging technologies or expiring patents

    Your analysis should provide a comprehensive assessment of {company}'s technological strengths and weaknesses, as well as the sustainability of its competitive advantages. Consider both the current state of its technology and its future potential in light of industry trends and advancements.
    """,
    llm=model,
    max_loops=1,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    state_save_file_type="json",
    saved_state_path="tech-expert.json",
)

# Initialize the Market Researcher
market_researcher = Agent(
    agent_name="Market-Researcher",
    system_prompt=f"""
    As the Market Researcher at Blackstone, your role is to analyze the target company's customer base, market share, and growth potential to assess the commercial viability and attractiveness of the potential acquisition.
    For the current potential acquisition of {company}, your tasks include:
    1. Analyzing {company}'s current customer base, including customer segmentation, concentration risk, and retention rates
    2. Assessing {company}'s market share within its target markets and identifying key factors driving its market position
    3. Conducting a detailed market sizing and segmentation analysis for the industrial robotics and automation markets, including identifying high-growth segments and emerging opportunities
    4. Evaluating the demand drivers and sales cycles for {company}'s products and services, and identifying any potential risks or limitations to adoption
    5. Developing financial projections and estimates for {company}'s revenue growth potential based on the market analysis and assumptions around market share and penetration

    Your analysis should provide a data-driven assessment of the market opportunity for {company} and the feasibility of achieving our investment return targets. Consider both bottom-up and top-down market perspectives, and identify any key sensitivities or assumptions in your projections.
    """,
    llm=model,
    max_loops=1,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    state_save_file_type="json",
    saved_state_path="market-researcher.json",
)

# Initialize the Regulatory Specialist
regulatory_specialist = Agent(
    agent_name="Regulatory-Specialist",
    system_prompt=f"""
    As the Regulatory Specialist at Blackstone, your role is to identify and assess any regulatory risks, compliance requirements, and potential legal liabilities associated with potential acquisitions.
    For the current potential acquisition of {company}, your tasks include:
    1. Identifying all relevant regulatory bodies and laws that govern the operations of {company}, including industry-specific regulations, labor laws, and environmental regulations
    2. Reviewing {company}'s current compliance policies, procedures, and track record to identify any potential gaps or areas of non-compliance
    3. Assessing the potential impact of any pending or proposed changes to relevant regulations that could affect {company}'s business or create additional compliance burdens
    4. Evaluating the potential legal liabilities and risks associated with {company}'s products, services, and operations, including product liability, intellectual property, and customer contracts
    5. Providing recommendations on any regulatory or legal due diligence steps that should be taken as part of the acquisition process, as well as any post-acquisition integration considerations

    Your analysis should provide a comprehensive assessment of the regulatory and legal landscape surrounding {company}, and identify any material risks or potential deal-breakers. Consider both the current state and future outlook, and provide practical recommendations to mitigate identified risks.
    """,
    llm=model,
    max_loops=1,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    state_save_file_type="json",
    saved_state_path="regulatory-specialist.json",
)

# Create a list of agents
agents = [
    managing_director,
    vp_finance,
    industry_analyst,
    tech_expert,
    market_researcher,
    regulatory_specialist,
]


swarm = SequentialWorkflow(
    name="blackstone-private-equity-advisors",
    agents=agents,
)

print(
    swarm.run(
        "Analyze NVIDIA: is it a good deal to invest $10B in right now?"
    )
)

```
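
In a `SequentialWorkflow`, the order of the `agents` list defines the order of execution: each agent receives the previous agent's output as its input, so the managing director's framing flows down through the finance, industry, technology, market, and regulatory analyses before the final response is returned.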

------

## `AgentRearrange`

The `AgentRearrange` orchestration technique, inspired by einops and einsum, allows you to define and map out the relationships between various agents. You can specify linear, sequential relationships such as `a -> a1 -> a2 -> a3`, or concurrent relationships where the first agent sends a message to three agents simultaneously: `a -> a1, a2, a3`. This level of customization enables highly efficient and dynamic workflows in which agents work in parallel or in sequence as needed, and it gives you fine-grained control over orchestration. A minimal sketch of the flow syntax is shown below; for more detailed information and examples, please refer to the [official documentation](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/).
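
To make the flow syntax concrete, here is a minimal sketch. The three agents and their prompts are illustrative placeholders (not part of the full example further down); only the `agents` and `flow` parameters documented in the tables below are assumed.

```python
from swarms import Agent, AgentRearrange

# Placeholder agents (names and prompts are illustrative assumptions)
researcher = Agent(agent_name="Researcher", system_prompt="Research the topic.", model_name="gpt-4o", max_loops=1)
writer = Agent(agent_name="Writer", system_prompt="Draft a short report.", model_name="gpt-4o", max_loops=1)
reviewer = Agent(agent_name="Reviewer", system_prompt="Critique the draft.", model_name="gpt-4o", max_loops=1)

# Sequential: Researcher runs first, then Writer, then Reviewer
sequential = AgentRearrange(
    agents=[researcher, writer, reviewer],
    flow="Researcher -> Writer -> Reviewer",
)

# Concurrent fan-out: Researcher's output goes to Writer and Reviewer at once
fan_out = AgentRearrange(
    agents=[researcher, writer, reviewer],
    flow="Researcher -> Writer, Reviewer",
)

# output = sequential.run("Summarize the state of industrial robotics")
```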
[Check out my video on agent rearrange!](https://youtu.be/Rq8wWQ073mg)


### Methods

| Method | Description | Parameters | Return Value |
|--------|-------------|------------|--------------|
| `__init__` | Initialize the AgentRearrange | `agents`: List of Agent objects<br>`flow`: String describing the agent flow | None |
| `run` | Execute the workflow | `input_data`: Initial input for the first agent | Final output after all agents have processed |

### Inputs

| Input | Type | Description |
|-------|------|-------------|
| `agents` | List[Agent] | List of Agent objects to be orchestrated |
| `flow` | str | String describing the flow of agents (e.g., "A -> B, C") |

### Output

The `run` method returns the final output after all agents have processed the input according to the specified flow.

```python
from datetime import datetime

from swarms import Agent, AgentRearrange, create_file_in_folder

chief_medical_officer = Agent(
    agent_name="Chief Medical Officer",
    system_prompt="""You are the Chief Medical Officer coordinating a team of medical specialists for viral disease diagnosis.
    Your responsibilities include:
    - Gathering initial patient symptoms and medical history
    - Coordinating with specialists to form differential diagnoses
    - Synthesizing different specialist opinions into a cohesive diagnosis
    - Ensuring all relevant symptoms and test results are considered
    - Making final diagnostic recommendations
    - Suggesting treatment plans based on team input
    - Identifying when additional specialists need to be consulted

    Guidelines:
    1. Always start with a comprehensive patient history
    2. Consider both common and rare viral conditions
    3. Factor in patient demographics and risk factors
    4. Document your reasoning process clearly
    5. Highlight any critical or emergency symptoms
    6. Note any limitations or uncertainties in the diagnosis

    Format all responses with clear sections for:
    - Initial Assessment
    - Differential Diagnoses
    - Specialist Consultations Needed
    - Recommended Next Steps""",
    model_name="gpt-4o",  # models are routed through litellm, so e.g. "claude-2" also works
    max_loops=1,
)

# Viral Disease Specialist
virologist = Agent(
    agent_name="Virologist",
    system_prompt="""You are a specialist in viral diseases with expertise in:
    - Respiratory viruses (Influenza, Coronavirus, RSV)
    - Systemic viral infections (EBV, CMV, HIV)
    - Childhood viral diseases (Measles, Mumps, Rubella)
    - Emerging viral threats

    Your role involves:
    1. Analyzing symptoms specific to viral infections
    2. Distinguishing between different viral pathogens
    3. Assessing viral infection patterns and progression
    4. Recommending specific viral tests
    5. Evaluating epidemiological factors

    For each case, consider:
    - Incubation periods
    - Transmission patterns
    - Seasonal factors
    - Geographic prevalence
    - Patient immune status
    - Current viral outbreaks

    Provide detailed analysis of:
    - Characteristic viral symptoms
    - Disease progression timeline
    - Risk factors for severe disease
    - Potential complications""",
    model_name="gpt-4o",
    max_loops=1,
)

# Internal Medicine Specialist
internist = Agent(
    agent_name="Internist",
    system_prompt="""You are an Internal Medicine specialist responsible for:
    - Comprehensive system-based evaluation
    - Integration of symptoms across organ systems
    - Identification of systemic manifestations
    - Assessment of comorbidities

    For each case, analyze:
    1. Vital signs and their implications
    2. System-by-system review (cardiovascular, respiratory, etc.)
    3. Impact of existing medical conditions
    4. Medication interactions and contraindications
    5. Risk stratification

    Consider these aspects:
    - Age-related factors
    - Chronic disease impact
    - Medication history
    - Social and environmental factors

    Document:
    - Physical examination findings
    - System-specific symptoms
    - Relevant lab abnormalities
    - Risk factors for complications""",
    model_name="gpt-4o",
    max_loops=1,
)

# Diagnostic Synthesizer
synthesizer = Agent(
    agent_name="Diagnostic Synthesizer",
    system_prompt="""You are responsible for synthesizing all specialist inputs to create a final diagnostic assessment:

    Core responsibilities:
    1. Integrate findings from all specialists
    2. Identify patterns and correlations
    3. Resolve conflicting opinions
    4. Generate probability-ranked differential diagnoses
    5. Recommend additional testing if needed

    Analysis framework:
    - Weight evidence based on reliability and specificity
    - Consider epidemiological factors
    - Evaluate diagnostic certainty
    - Account for test limitations

    Provide structured output including:
    1. Primary diagnosis with confidence level
    2. Supporting evidence summary
    3. Alternative diagnoses to consider
    4. Recommended confirmatory tests
    5. Red flags or warning signs
    6. Follow-up recommendations

    Documentation requirements:
    - Clear reasoning chain
    - Evidence quality assessment
    - Confidence levels for each diagnosis
    - Knowledge gaps identified
    - Risk assessment""",
    model_name="gpt-4o",
    max_loops=1,
)

# Create agent list
agents = [chief_medical_officer, virologist, internist, synthesizer]

# Define diagnostic flow
flow = f"""{chief_medical_officer.agent_name} -> {virologist.agent_name} -> {internist.agent_name} -> {synthesizer.agent_name}"""

# Create the swarm system
diagnosis_system = AgentRearrange(
    name="Medical-nlp-diagnosis-swarm",
    description="natural language symptoms to diagnosis report",
    agents=agents,
    flow=flow,
    max_loops=1,
    output_type="all",
)


# Example usage
if __name__ == "__main__":
    # Example patient case
    patient_case = """
    Patient: 45-year-old female
    Presenting symptoms:
    - Fever (101.5°F) for 3 days
    - Dry cough
    - Fatigue
    - Mild shortness of breath
    Medical history:
    - Controlled hypertension
    - No recent travel
    - Fully vaccinated for COVID-19
    - No known sick contacts
    """

    # Add timestamp to the patient case
    case_info = f"Timestamp: {datetime.now()}\nPatient Information: {patient_case}"

    # Run the diagnostic process
    diagnosis = diagnosis_system.run(case_info)

    # Create a folder and file called reports
    create_file_in_folder(
        "reports", "medical_analysis_agent_rearrange.md", diagnosis
    )

```
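
Note the `output_type="all"` setting on the `AgentRearrange` above: rather than returning only the synthesizer's final answer, the swarm returns the combined history of all agents in the flow, which is what gets written to the markdown report. (This is a brief reading of the option; see the official docs for the exact return format.)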

## `HierarchicalSwarm`
Coming soon...


## `GraphSwarm`


The `GraphSwarm` is a workflow management system designed to orchestrate complex tasks by leveraging the power of graph theory. It enables the creation of a directed acyclic graph (DAG) to model dependencies between tasks and agents, which allows for efficient task assignment, execution, and monitoring. In the library it is exposed as the `GraphWorkflow` class, used in the example below.

Here's a breakdown of how the `GraphSwarm` works:

1. **Node Creation**: The workflow is composed of nodes, which can be either agents or tasks. Agents are responsible for executing tasks, and tasks represent specific operations that need to be performed. In the example, two agents (`agent1` and `agent2`) and one task (`task1`) are created.
2. **Edge Definition**: Edges define the relationships between nodes. In this case, edges connect `agent1` and `agent2` to `task1`, indicating that both agents are capable of executing `task1`.
3. **Entry and End Points**: The workflow requires the definition of entry points (where the workflow starts) and end points (where it concludes). In this example, `agent1` and `agent2` are set as entry points, and `task1` is set as the end point.
4. **Visualization**: A built-in visualization feature graphically represents the workflow, which makes the structure easy to understand and debug.
5. **Execution**: The workflow is executed by traversing the graph from the entry points to the end points. In this case, both `agent1` and `agent2` execute `task1` concurrently, and the results are collected.
6. **Results**: The final results of the workflow execution are aggregated and returned. In this example, the result of executing `task1` is "Task completed".

The `GraphSwarm` offers several benefits, including:

* **Concurrency**: Enables tasks to execute concurrently, improving overall workflow efficiency.
* **Flexibility**: Allows for dynamic task assignment based on agent availability and task requirements.
* **Scalability**: Supports the addition of new agents and tasks as needed, making it suitable for large-scale workflows.
* **Visualization**: Provides a graphical representation of the workflow, facilitating understanding and debugging.

By leveraging the `GraphSwarm`, complex workflows can be efficiently managed, and tasks can be executed in a coordinated and scalable manner.


### Methods

| Method | Description | Parameters | Return Value |
|--------|-------------|------------|--------------|
| `add_node` | Add a node to the graph | `node`: Node object | None |
| `add_edge` | Add an edge to the graph | `edge`: Edge object | None |
| `set_entry_points` | Set the entry points of the graph | `entry_points`: List of node IDs | None |
| `set_end_points` | Set the end points of the graph | `end_points`: List of node IDs | None |
| `visualize` | Generate a visual representation of the graph | None | String representation of the graph |
| `run` | Execute the workflow | None | Dictionary of execution results |

### Inputs

| Input | Type | Description |
|-------|------|-------------|
| `Node` | Object | Represents a node in the graph (agent or task) |
| `Edge` | Object | Represents an edge connecting two nodes |
| `entry_points` | List[str] | List of node IDs where the workflow starts |
| `end_points` | List[str] | List of node IDs where the workflow ends |

### Output

The `run` method returns a dictionary containing the execution results of all nodes in the graph.

```python
import os

from dotenv import load_dotenv

from swarms import Agent, Edge, GraphWorkflow, Node, NodeType
from swarm_models import OpenAIChat

load_dotenv()

api_key = os.environ.get("OPENAI_API_KEY")

llm = OpenAIChat(
    temperature=0.5, openai_api_key=api_key, max_tokens=4000
)
agent1 = Agent(llm=llm, max_loops=1, autosave=True, dashboard=True)
agent2 = Agent(llm=llm, max_loops=1, autosave=True, dashboard=True)

def sample_task():
    print("Running sample task")
    return "Task completed"

wf_graph = GraphWorkflow()
wf_graph.add_node(Node(id="agent1", type=NodeType.AGENT, agent=agent1))
wf_graph.add_node(Node(id="agent2", type=NodeType.AGENT, agent=agent2))
wf_graph.add_node(
    Node(id="task1", type=NodeType.TASK, callable=sample_task)
)
wf_graph.add_edge(Edge(source="agent1", target="task1"))
wf_graph.add_edge(Edge(source="agent2", target="task1"))

wf_graph.set_entry_points(["agent1", "agent2"])
wf_graph.set_end_points(["task1"])

print(wf_graph.visualize())

# Run the workflow
results = wf_graph.run()
print("Execution results:", results)

```
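
Extending the graph reuses the same calls shown above. The sketch below is a hypothetical extension (the `task2` and `summarize_task` names are illustrative assumptions) that chains a downstream task after `task1`:

```python
# Hypothetical extension of wf_graph from the example above
def summarize_task():
    # A second task that runs after task1 completes
    return "Summary completed"

wf_graph.add_node(Node(id="task2", type=NodeType.TASK, callable=summarize_task))
wf_graph.add_edge(Edge(source="task1", target="task2"))

# task2 is now the terminal node of the DAG
wf_graph.set_end_points(["task2"])
```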

## `MixtureOfAgents`
This is an implementation based on the paper "Mixture-of-Agents Enhances Large Language Model Capabilities" by together.ai, available at [https://arxiv.org/abs/2406.04692](https://arxiv.org/abs/2406.04692). It achieves state-of-the-art (SOTA) results on AlpacaEval 2.0, MT-Bench, and FLASK, surpassing GPT-4 Omni. This architecture is particularly suitable for tasks that require parallelization followed by sequential processing in another loop.


### Methods

| Method | Description | Parameters | Return Value |
|--------|-------------|------------|--------------|
| `__init__` | Initialize the MixtureOfAgents | `name`: Name of the swarm<br>`agents`: List of Agent objects<br>`layers`: Number of processing layers<br>`aggregator_agent`: Agent that synthesizes the final output | None |
| `run` | Execute the swarm | `task`: Input task for the swarm | Final output after all agents have processed |

### Inputs

| Input | Type | Description |
|-------|------|-------------|
| `name` | str | Name of the swarm |
| `agents` | List[Agent] | List of Agent objects to be used in the swarm |
| `layers` | int | Number of processing layers in the swarm |
| `aggregator_agent` | Agent | Agent responsible for aggregating the final output |

### Output

The `run` method returns the final output after all agents have processed the input according to the specified layers and the aggregator agent.

```python
from swarms import Agent, MixtureOfAgents

# Agent 1: Financial Statement Analyzer
agent1 = Agent(
    agent_name="FinancialStatementAnalyzer",
    model_name="gpt-4o",
    system_prompt="""You are a Financial Statement Analyzer specializing in 10-K SEC reports. Your primary focus is on analyzing the financial statements, including the balance sheet, income statement, and cash flow statement.

Key responsibilities:
1. Identify and explain significant changes in financial metrics year-over-year.
2. Calculate and interpret key financial ratios (e.g., liquidity ratios, profitability ratios, leverage ratios).
3. Analyze trends in revenue, expenses, and profitability.
4. Highlight any red flags or areas of concern in the financial statements.
5. Provide insights on the company's financial health and performance based on the data.

When analyzing, consider industry standards and compare the company's performance to its peers when possible. Your analysis should be thorough, data-driven, and provide actionable insights for investors and stakeholders.""",
    max_loops=1,
    autosave=True,
    dashboard=False,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="financial_statement_analyzer_state.json",
    user_name="swarms_corp",
    retry_attempts=1,
    context_length=200000,
    return_step_meta=False,
)

# Agent 2: Risk Assessment Specialist
agent2 = Agent(
    agent_name="RiskAssessmentSpecialist",
    model_name="gpt-4o",
    system_prompt="""You are a Risk Assessment Specialist focusing on 10-K SEC reports. Your primary role is to identify, analyze, and evaluate potential risks disclosed in the report.

Key responsibilities:
1. Thoroughly review the "Risk Factors" section of the 10-K report.
2. Identify and categorize different types of risks (e.g., operational, financial, legal, market, technological).
3. Assess the potential impact and likelihood of each identified risk.
4. Analyze the company's risk mitigation strategies and their effectiveness.
5. Identify any emerging risks not explicitly mentioned but implied by the company's operations or market conditions.
6. Compare the company's risk profile with industry peers when possible.

Your analysis should provide a comprehensive overview of the company's risk landscape, helping stakeholders understand the potential challenges and uncertainties facing the business. Be sure to highlight any critical risks that could significantly impact the company's future performance or viability.""",
    max_loops=1,
    autosave=True,
    dashboard=False,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="risk_assessment_specialist_state.json",
    user_name="swarms_corp",
    retry_attempts=1,
    context_length=200000,
    return_step_meta=False,
)

# Agent 3: Business Strategy Evaluator
agent3 = Agent(
    agent_name="BusinessStrategyEvaluator",
    model_name="gpt-4o",
    system_prompt="""You are a Business Strategy Evaluator specializing in analyzing 10-K SEC reports. Your focus is on assessing the company's overall strategy, market position, and future outlook.

Key responsibilities:
1. Analyze the company's business description, market opportunities, and competitive landscape.
2. Evaluate the company's products or services, including their market share and growth potential.
3. Assess the effectiveness of the company's current business strategy and its alignment with market trends.
4. Identify key performance indicators (KPIs) and evaluate the company's performance against these metrics.
5. Analyze management's discussion and analysis (MD&A) section to understand their perspective on the business.
6. Identify potential growth opportunities or areas for improvement in the company's strategy.
7. Compare the company's strategic position with key competitors in the industry.

Your analysis should provide insights into the company's strategic direction, its ability to create value, and its potential for future growth. Consider both short-term and long-term perspectives in your evaluation.""",
    max_loops=1,
    autosave=True,
    dashboard=False,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="business_strategy_evaluator_state.json",
    user_name="swarms_corp",
    retry_attempts=1,
    context_length=200000,
    return_step_meta=False,
)

# Aggregator Agent
aggregator_agent = Agent(
    agent_name="10KReportAggregator",
    model_name="gpt-4o",
    system_prompt="""You are the 10-K Report Aggregator, responsible for synthesizing and summarizing the analyses provided by the Financial Statement Analyzer, Risk Assessment Specialist, and Business Strategy Evaluator. Your goal is to create a comprehensive, coherent, and insightful summary of the 10-K SEC report.

Key responsibilities:
1. Integrate the financial analysis, risk assessment, and business strategy evaluation into a unified report.
2. Identify and highlight the most critical information and insights from each specialist's analysis.
3. Reconcile any conflicting information or interpretations among the specialists' reports.
4. Provide a balanced view of the company's overall performance, risks, and strategic position.
5. Summarize key findings and their potential implications for investors and stakeholders.
6. Identify any areas where further investigation or clarification may be needed.

Your final report should be well-structured, easy to understand, and provide a holistic view of the company based on the 10-K SEC report. It should offer valuable insights for decision-making while acknowledging any limitations or uncertainties in the analysis.""",
    max_loops=1,
    autosave=True,
    dashboard=False,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="10k_report_aggregator_state.json",
    user_name="swarms_corp",
    retry_attempts=1,
    context_length=200000,
    return_step_meta=False,
)

# Create the Mixture of Agents class
moa = MixtureOfAgents(
    agents=[agent1, agent2, agent3],
    aggregator_agent=aggregator_agent,
    aggregator_system_prompt="""As the 10-K Report Aggregator, your task is to synthesize the analyses provided by the Financial Statement Analyzer, Risk Assessment Specialist, and Business Strategy Evaluator into a comprehensive and coherent report.

Follow these steps:
1. Review and summarize the key points from each specialist's analysis.
2. Identify common themes and insights across the analyses.
3. Highlight any discrepancies or conflicting interpretations, if present.
4. Provide a balanced and integrated view of the company's financial health, risks, and strategic position.
5. Summarize the most critical findings and their potential impact on investors and stakeholders.
6. Suggest areas for further investigation or monitoring, if applicable.

Your final output should be a well-structured, insightful report that offers a holistic view of the company based on the 10-K SEC report analysis.""",
    layers=3,
)

# Example usage
company_name = "NVIDIA"
out = moa.run(
    f"Analyze the latest 10-K SEC report for {company_name}. Provide a comprehensive summary of the company's financial performance, risk profile, and business strategy."
)
print(out)

```
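
A note on `layers=3`: in the MoA design, the proposer agents' responses are passed back through the agent stack for multiple refinement rounds before the `aggregator_agent` synthesizes the final report. This is a simplified description of the layering; see the paper and the official docs for the precise mechanics.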


## `SpreadSheetSwarm`
The `SpreadSheetSwarm` is designed for concurrent management and oversight of thousands of agents, facilitating a one-to-many approach for efficient task processing and output analysis.

### Key Features

* **Concurrency**: Enables the simultaneous execution of multiple agents, significantly reducing processing time and increasing overall system efficiency.
* **One-to-Many**: Allows a single task to be dynamically distributed among multiple agents, ensuring that each agent is utilized to its full potential.
* **Scalability**: Supports the integration of thousands of agents, making it an ideal solution for large-scale task processing and data analysis.


### Methods

| Method | Description | Parameters | Return Value |
|--------|-------------|------------|--------------|
| `__init__` | Initialize the SpreadSheetSwarm | `name`: Name of the swarm<br>`description`: Description of the swarm<br>`agents`: List of Agent objects<br>`autosave_on`: Boolean to enable autosave<br>`save_file_path`: Path to save the spreadsheet<br>`run_all_agents`: Boolean to run all agents or not<br>`max_loops`: Maximum number of loops | None |
| `run` | Execute the swarm | `task`: Input task for the swarm | Dictionary of agent outputs |

### Inputs

| Input | Type | Description |
|-------|------|-------------|
| `name` | str | Name of the swarm |
| `description` | str | Description of the swarm's purpose |
| `agents` | List[Agent] | List of Agent objects to be used in the swarm |
| `autosave_on` | bool | Enable autosaving of results |
| `save_file_path` | str | Path to save the spreadsheet results |
| `run_all_agents` | bool | Whether to run all agents or select based on relevance |
| `max_loops` | int | Maximum number of processing loops |

### Output

The `run` method returns a dictionary containing the outputs of each agent that processed the task.


[Learn more in the official documentation.](https://docs.swarms.world/en/latest/swarms/structs/spreadsheet_swarm/)

```python
from swarms import Agent, SpreadSheetSwarm

# Define custom system prompts for each social media platform
TWITTER_AGENT_SYS_PROMPT = """
You are a Twitter marketing expert specializing in real estate. Your task is to create engaging, concise tweets to promote properties, analyze trends to maximize engagement, and use appropriate hashtags and timing to reach potential buyers.
"""

INSTAGRAM_AGENT_SYS_PROMPT = """
You are an Instagram marketing expert focusing on real estate. Your task is to create visually appealing posts with engaging captions and hashtags to showcase properties, targeting specific demographics interested in real estate.
"""

FACEBOOK_AGENT_SYS_PROMPT = """
You are a Facebook marketing expert for real estate. Your task is to craft posts optimized for engagement and reach on Facebook, including using images, links, and targeted messaging to attract potential property buyers.
"""

LINKEDIN_AGENT_SYS_PROMPT = """
You are a LinkedIn marketing expert for the real estate industry. Your task is to create professional and informative posts, highlighting property features, market trends, and investment opportunities, tailored to professionals and investors.
"""

EMAIL_AGENT_SYS_PROMPT = """
You are an Email marketing expert specializing in real estate. Your task is to write compelling email campaigns to promote properties, focusing on personalization, subject lines, and effective call-to-action strategies to drive conversions.
"""

# Initialize your agents for different social media platforms
agents = [
    Agent(
        agent_name="Twitter-RealEstate-Agent",
        system_prompt=TWITTER_AGENT_SYS_PROMPT,
        model_name="gpt-4o",
        max_loops=1,
        dynamic_temperature_enabled=True,
        saved_state_path="twitter_realestate_agent.json",
        user_name="realestate_swarms",
        retry_attempts=1,
    ),
    Agent(
        agent_name="Instagram-RealEstate-Agent",
        system_prompt=INSTAGRAM_AGENT_SYS_PROMPT,
        model_name="gpt-4o",
        max_loops=1,
        dynamic_temperature_enabled=True,
        saved_state_path="instagram_realestate_agent.json",
        user_name="realestate_swarms",
        retry_attempts=1,
    ),
    Agent(
        agent_name="Facebook-RealEstate-Agent",
        system_prompt=FACEBOOK_AGENT_SYS_PROMPT,
        model_name="gpt-4o",
        max_loops=1,
        dynamic_temperature_enabled=True,
        saved_state_path="facebook_realestate_agent.json",
        user_name="realestate_swarms",
        retry_attempts=1,
    ),
    Agent(
        agent_name="LinkedIn-RealEstate-Agent",
        system_prompt=LINKEDIN_AGENT_SYS_PROMPT,
        model_name="gpt-4o",
        max_loops=1,
        dynamic_temperature_enabled=True,
        saved_state_path="linkedin_realestate_agent.json",
        user_name="realestate_swarms",
        retry_attempts=1,
    ),
    Agent(
        agent_name="Email-RealEstate-Agent",
        system_prompt=EMAIL_AGENT_SYS_PROMPT,
        model_name="gpt-4o",
        max_loops=1,
        dynamic_temperature_enabled=True,
        saved_state_path="email_realestate_agent.json",
        user_name="realestate_swarms",
        retry_attempts=1,
    ),
]

# Create a Swarm with the list of agents
swarm = SpreadSheetSwarm(
    name="Real-Estate-Marketing-Swarm",
    description="A swarm that processes real estate marketing tasks using multiple agents on different threads.",
    agents=agents,
    autosave_on=True,
    save_file_path="real_estate_marketing_spreadsheet.csv",
    run_all_agents=False,
    max_loops=2,
)

# Run the swarm
swarm.run(
    task="""
    Create posts to promote luxury properties in North Texas, highlighting their features, location, and investment potential. Include relevant hashtags, images, and engaging captions.


    Property:
    $10,399,000
    1609 Meandering Way Dr, Roanoke, TX 76262
    Link to the property: https://www.zillow.com/homedetails/1609-Meandering-Way-Dr-Roanoke-TX-76262/308879785_zpid/

    What's special
    Unveiling a new custom estate in the prestigious gated Quail Hollow Estates! This impeccable residence, set on a sprawling acre surrounded by majestic trees, features a gourmet kitchen equipped with top-tier Subzero and Wolf appliances. European soft-close cabinets and drawers, paired with a double Cambria Quartzite island, perfect for family gatherings. The first-floor game room & media room add extra layers of entertainment. Step into the outdoor sanctuary, where a sparkling pool and spa and a sunken fire pit beckon leisure. The lavish master suite features stunning marble accents, custom his & her closets, and a secure storm shelter. Throughout the home, indulge in the visual charm of designer lighting and wallpaper, elevating every space. The property is complete with a 6-car garage and a sports court, catering to the preferences of basketball or pickleball enthusiasts. This residence seamlessly combines luxury & recreational amenities, making it a must-see for the discerning buyer.

    Facts & features
    Interior
    Bedrooms & bathrooms
    Bedrooms: 6
    Bathrooms: 8
    Full bathrooms: 7
    1/2 bathrooms: 1
    Primary bedroom
    Bedroom
    Features: Built-in Features, En Suite Bathroom, Walk-In Closet(s)
    Cooling
    Central Air, Ceiling Fan(s), Electric
    Appliances
    Included: Built-In Gas Range, Built-In Refrigerator, Double Oven, Dishwasher, Gas Cooktop, Disposal, Ice Maker, Microwave, Range, Refrigerator, Some Commercial Grade, Vented Exhaust Fan, Warming Drawer, Wine Cooler
    Features
    Wet Bar, Built-in Features, Dry Bar, Decorative/Designer Lighting Fixtures, Eat-in Kitchen, Elevator, High Speed Internet, Kitchen Island, Pantry, Smart Home, Cable TV, Walk-In Closet(s), Wired for Sound
    Flooring: Hardwood
    Has basement: No
    Number of fireplaces: 3
    Fireplace features: Living Room, Primary Bedroom
    Interior area
    Total interior livable area: 10,466 sqft
    Total spaces: 12
    Parking features: Additional Parking
    Attached garage spaces: 6
    Carport spaces: 6
    Features
    Levels: Two
    Stories: 2
    Patio & porch: Covered
    Exterior features: Built-in Barbecue, Barbecue, Gas Grill, Lighting, Outdoor Grill, Outdoor Living Area, Private Yard, Sport Court, Fire Pit
    Pool features: Heated, In Ground, Pool, Pool/Spa Combo
    Fencing: Wrought Iron
    Lot
    Size: 1.05 Acres
    Details
    Additional structures: Outdoor Kitchen
    Parcel number: 42232692
    Special conditions: Standard
    Construction
    Type & style
    Home type: SingleFamily
    Architectural style: Contemporary/Modern, Detached
    Property subtype: Single Family Residence
    """
)

```
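
Because `autosave_on=True`, every agent's output is appended to the CSV at `save_file_path`, so a run can be inspected like any other spreadsheet. A minimal sketch, assuming pandas is installed and the run above has completed (the exact column layout is whatever the swarm writes, so inspect it rather than relying on specific column names):

```python
import pandas as pd

# Load the spreadsheet that SpreadSheetSwarm autosaved during the run
df = pd.read_csv("real_estate_marketing_spreadsheet.csv")

print(df.shape)   # roughly one row per agent run
print(df.head())  # inspect the recorded tasks and outputs
```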


## `ForestSwarm`
The `ForestSwarm` architecture is designed for efficient task assignment by dynamically selecting the most suitable agent from a collection of trees. Tasks are processed asynchronously, with agents chosen based on their relevance to the task at hand; relevance is determined by computing the similarity between each agent's system prompt and the keywords present in the task itself. For a more in-depth understanding of how `ForestSwarm` works, please refer to the [official documentation](https://docs.swarms.world/en/latest/swarms/structs/forest_swarm/).


### Methods

| Method | Description | Parameters | Return Value |
|--------|-------------|------------|--------------|
| `__init__` | Initialize the ForestSwarm | `trees`: List of Tree objects | None |
| `run` | Execute the ForestSwarm | `task`: Input task for the swarm | Output from the most relevant agent |

### Inputs

| Input | Type | Description |
|-------|------|-------------|
| `trees` | List[Tree] | List of Tree objects, each containing TreeAgent objects |
| `task` | str | The task to be processed by the ForestSwarm |

### Output

The `run` method returns the output from the most relevant agent selected based on the input task.

```python
from swarms import TreeAgent, Tree, ForestSwarm

# Create agents with varying system prompts and dynamically generated distances/keywords
agents_tree1 = [
    TreeAgent(
        system_prompt="""You are an expert Stock Analysis Agent with deep knowledge of financial markets, technical analysis, and fundamental analysis. Your primary function is to analyze stock performance, market trends, and provide actionable insights. When analyzing stocks:

1. Always start with a brief overview of the current market conditions.
2. Use a combination of technical indicators (e.g., moving averages, RSI, MACD) and fundamental metrics (e.g., P/E ratio, EPS growth, debt-to-equity).
3. Consider both short-term and long-term perspectives in your analysis.
4. Provide clear buy, hold, or sell recommendations with supporting rationale.
5. Highlight potential risks and opportunities specific to each stock or sector.
6. Use bullet points for clarity when listing key points or metrics.
7. If relevant, compare the stock to its peers or sector benchmarks.

Remember to maintain objectivity and base your analysis on factual data. If asked about future performance, always include a disclaimer about market unpredictability. Your goal is to provide comprehensive, accurate, and actionable stock analysis to inform investment decisions.""",
        agent_name="Stock Analysis Agent",
    ),
    TreeAgent(
        system_prompt="""You are a highly skilled Financial Planning Agent, specializing in personal and corporate financial strategies. Your role is to provide comprehensive financial advice tailored to each client's unique situation. When creating financial plans:

1. Begin by asking key questions about the client's financial goals, current situation, and risk tolerance.
2. Develop a holistic view of the client's finances, including income, expenses, assets, and liabilities.
3. Create detailed, step-by-step action plans to achieve financial goals.
4. Provide specific recommendations for budgeting, saving, and investing.
5. Consider tax implications and suggest tax-efficient strategies.
6. Incorporate risk management and insurance planning into your recommendations.
7. Use charts or tables to illustrate financial projections and scenarios.
8. Regularly suggest reviewing and adjusting the plan as circumstances change.

Always prioritize the client's best interests and adhere to fiduciary standards. Explain complex financial concepts in simple terms, and be prepared to justify your recommendations with data and reasoning.""",
        agent_name="Financial Planning Agent",
    ),
    TreeAgent(
        agent_name="Retirement Strategy Agent",
        system_prompt="""You are a specialized Retirement Strategy Agent, focused on helping individuals and couples plan for a secure and comfortable retirement. Your expertise covers various aspects of retirement planning, including savings strategies, investment allocation, and income generation during retirement. When developing retirement strategies:

1. Start by assessing the client's current age, desired retirement age, and expected lifespan.
2. Calculate retirement savings goals based on desired lifestyle and projected expenses.
3. Analyze current retirement accounts (e.g., 401(k), IRA) and suggest optimization strategies.
4. Provide guidance on asset allocation and rebalancing as retirement approaches.
5. Explain various retirement income sources (e.g., Social Security, pensions, annuities).
6. Discuss healthcare costs and long-term care planning.
7. Offer strategies for tax-efficient withdrawals during retirement.
8. Consider estate planning and legacy goals in your recommendations.

Use Monte Carlo simulations or other statistical tools to illustrate the probability of retirement success. Always emphasize the importance of starting early and the power of compound interest. Be prepared to adjust strategies based on changing market conditions or personal circumstances.""",
    ),
]

agents_tree2 = [
    TreeAgent(
        system_prompt="""You are a knowledgeable Tax Filing Agent, specializing in personal and business tax preparation and strategy. Your role is to ensure accurate tax filings while maximizing legitimate deductions and credits. When assisting with tax matters:

1. Start by gathering all necessary financial information and documents.
2. Stay up-to-date with the latest tax laws and regulations, including state-specific rules.
3. Identify all applicable deductions and credits based on the client's situation.
4. Provide step-by-step guidance for completing tax forms accurately.
5. Explain tax implications of various financial decisions.
6. Offer strategies for tax-efficient investing and income management.
7. Assist with estimated tax payments for self-employed individuals or businesses.
8. Advise on record-keeping practices for tax purposes.

Always prioritize compliance with tax laws while ethically minimizing tax liability. Be prepared to explain complex tax concepts in simple terms and provide rationale for your recommendations. If a situation is beyond your expertise, advise consulting a certified tax professional or IRS resources.""",
        agent_name="Tax Filing Agent",
    ),
    TreeAgent(
        system_prompt="""You are a sophisticated Investment Strategy Agent, adept at creating and managing investment portfolios to meet diverse financial goals. Your expertise covers various asset classes, market analysis, and risk management techniques. When developing investment strategies:

1. Begin by assessing the client's investment goals, time horizon, and risk tolerance.
2. Provide a comprehensive overview of different asset classes and their risk-return profiles.
3. Create diversified portfolio recommendations based on modern portfolio theory.
4. Explain the benefits and risks of various investment vehicles (e.g., stocks, bonds, ETFs, mutual funds).
5. Incorporate both passive and active investment strategies as appropriate.
6. Discuss the importance of regular portfolio rebalancing and provide a rebalancing strategy.
7. Consider tax implications of investment decisions and suggest tax-efficient strategies.
8. Provide ongoing market analysis and suggest portfolio adjustments as needed.

Use historical data and forward-looking projections to illustrate potential outcomes. Always emphasize the importance of long-term investing and the risks of market timing. Be prepared to explain complex investment concepts in clear, accessible language.""",
        agent_name="Investment Strategy Agent",
    ),
    TreeAgent(
        system_prompt="""You are a specialized ROTH IRA Agent, focusing on the intricacies of Roth Individual Retirement Accounts. Your role is to provide expert guidance on Roth IRA rules, benefits, and strategies to maximize their value for retirement planning. When advising on Roth IRAs:

1. Explain the fundamental differences between traditional and Roth IRAs.
2. Clarify Roth IRA contribution limits and income eligibility requirements.
3. Discuss the tax advantages of Roth IRAs, including tax-free growth and withdrawals.
4. Provide guidance on Roth IRA conversion strategies and their tax implications.
5. Explain the five-year rule and how it affects Roth IRA withdrawals.
6. Offer strategies for maximizing Roth IRA contributions, such as the backdoor Roth IRA method.
7. Discuss how Roth IRAs fit into overall retirement and estate planning strategies.
8. Provide insights on investment choices within a Roth IRA to maximize tax-free growth.

Always stay current with IRS regulations regarding Roth IRAs. Be prepared to provide numerical examples to illustrate the long-term benefits of Roth IRAs. Emphasize the importance of considering individual financial situations when making Roth IRA decisions.""",
        agent_name="ROTH IRA Agent",
    ),
]

# Create trees
tree1 = Tree(tree_name="Financial Tree", agents=agents_tree1)
tree2 = Tree(tree_name="Investment Tree", agents=agents_tree2)

# Create the ForestSwarm
multi_agent_structure = ForestSwarm(trees=[tree1, tree2])

# Run a task
task = "What are the best platforms to do our taxes on?"
output = multi_agent_structure.run(task)
print(output)

```
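
Because this task mentions taxes, the similarity match between the task keywords and each agent's system prompt should route the query to the Tax Filing Agent in `tree2`. Only the selected agent runs, so a forest of many specialists stays inexpensive at inference time compared to broadcasting every task to every agent.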




## `SwarmRouter`
The `SwarmRouter` class is a flexible routing system designed to manage different types of swarms for task execution. It provides a unified interface for interacting with various swarm types, including `AgentRearrange`, `MixtureOfAgents`, `SpreadSheetSwarm`, `SequentialWorkflow`, and `ConcurrentWorkflow`. More swarm architectures will be added continuously as new ones are developed.

#### Attributes:
- `name` (str): Name of the SwarmRouter instance.
- `description` (str): Description of the SwarmRouter instance.
- `max_loops` (int): Maximum number of loops to perform.
- `agents` (List[Agent]): List of Agent objects to be used in the swarm.
- `swarm_type` (SwarmType): Type of swarm to be used.
- `swarm` (Union[AgentRearrange, MixtureOfAgents, SpreadSheetSwarm, SequentialWorkflow, ConcurrentWorkflow]): Instantiated swarm object.
- `logs` (List[SwarmLog]): List of log entries captured during operations.

#### Methods:
- `__init__(self, name: str, description: str, max_loops: int, agents: List[Agent], swarm_type: SwarmType, *args, **kwargs)`: Initialize the SwarmRouter.
- `_create_swarm(self, *args, **kwargs)`: Create and return the specified swarm type.
- `_log(self, level: str, message: str, task: str, metadata: Dict[str, Any])`: Create a log entry and add it to the logs list.
- `run(self, task: str, *args, **kwargs)`: Run the specified task on the selected swarm.
- `get_logs(self)`: Retrieve all logged entries.

```python
import os
from dotenv import load_dotenv
from swarms import Agent
from swarm_models import OpenAIChat
from swarms.structs.swarm_router import SwarmRouter, SwarmType

load_dotenv()

# Get the Groq API key from the environment variable
api_key = os.getenv("GROQ_API_KEY")

# Model (served through Groq's OpenAI-compatible endpoint)
model = OpenAIChat(
    openai_api_base="https://api.groq.com/openai/v1",
    openai_api_key=api_key,
    model_name="llama-3.1-70b-versatile",
    temperature=0.1,
)
# Define specialized system prompts for each agent
DATA_EXTRACTOR_PROMPT = """You are a highly specialized private equity agent focused on data extraction from various documents. Your expertise includes:
1. Extracting key financial metrics (revenue, EBITDA, growth rates, etc.) from financial statements and reports
2. Identifying and extracting important contract terms from legal documents
3. Pulling out relevant market data from industry reports and analyses
4. Extracting operational KPIs from management presentations and internal reports
5. Identifying and extracting key personnel information from organizational charts and bios
Provide accurate, structured data extracted from various document types to support investment analysis."""

SUMMARIZER_PROMPT = """You are an expert private equity agent specializing in summarizing complex documents. Your core competencies include:
1. Distilling lengthy financial reports into concise executive summaries
2. Summarizing legal documents, highlighting key terms and potential risks
3. Condensing industry reports to capture essential market trends and competitive dynamics
4. Summarizing management presentations to highlight key strategic initiatives and projections
5. Creating brief overviews of technical documents, emphasizing critical points for non-technical stakeholders
Deliver clear, concise summaries that capture the essence of various documents while highlighting information crucial for investment decisions."""

FINANCIAL_ANALYST_PROMPT = """You are a specialized private equity agent focused on financial analysis. Your key responsibilities include:
1. Analyzing historical financial statements to identify trends and potential issues
2. Evaluating the quality of earnings and potential adjustments to EBITDA
3. Assessing working capital requirements and cash flow dynamics
4. Analyzing capital structure and debt capacity
5. Evaluating financial projections and underlying assumptions
Provide thorough, insightful financial analysis to inform investment decisions and valuation."""

MARKET_ANALYST_PROMPT = """You are a highly skilled private equity agent specializing in market analysis. Your expertise covers:
1. Analyzing industry trends, growth drivers, and potential disruptors
2. Evaluating competitive landscape and market positioning
3. Assessing market size, segmentation, and growth potential
4. Analyzing customer dynamics, including concentration and loyalty
5. Identifying potential regulatory or macroeconomic impacts on the market
Deliver comprehensive market analysis to assess the attractiveness and risks of potential investments."""

OPERATIONAL_ANALYST_PROMPT = """You are an expert private equity agent focused on operational analysis. Your core competencies include:
1. Evaluating operational efficiency and identifying improvement opportunities
2. Analyzing supply chain and procurement processes
3. Assessing sales and marketing effectiveness
4. Evaluating IT systems and digital capabilities
5. Identifying potential synergies in merger or add-on acquisition scenarios
Provide detailed operational analysis to uncover value creation opportunities and potential risks."""

# Initialize specialized agents
data_extractor_agent = Agent(
    agent_name="Data-Extractor",
    system_prompt=DATA_EXTRACTOR_PROMPT,
    llm=model,
    max_loops=1,
    autosave=True,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="data_extractor_agent.json",
    user_name="pe_firm",
    retry_attempts=1,
    context_length=200000,
    output_type="string",
)

summarizer_agent = Agent(
    agent_name="Document-Summarizer",
    system_prompt=SUMMARIZER_PROMPT,
    llm=model,
    max_loops=1,
    autosave=True,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="summarizer_agent.json",
    user_name="pe_firm",
    retry_attempts=1,
    context_length=200000,
    output_type="string",
)

financial_analyst_agent = Agent(
    agent_name="Financial-Analyst",
    system_prompt=FINANCIAL_ANALYST_PROMPT,
    llm=model,
    max_loops=1,
    autosave=True,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="financial_analyst_agent.json",
    user_name="pe_firm",
    retry_attempts=1,
    context_length=200000,
    output_type="string",
)

market_analyst_agent = Agent(
    agent_name="Market-Analyst",
    system_prompt=MARKET_ANALYST_PROMPT,
    llm=model,
    max_loops=1,
    autosave=True,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="market_analyst_agent.json",
    user_name="pe_firm",
    retry_attempts=1,
    context_length=200000,
    output_type="string",
)

operational_analyst_agent = Agent(
    agent_name="Operational-Analyst",
    system_prompt=OPERATIONAL_ANALYST_PROMPT,
    llm=model,
    max_loops=1,
    autosave=True,
    verbose=True,
    dynamic_temperature_enabled=True,
    saved_state_path="operational_analyst_agent.json",
    user_name="pe_firm",
    retry_attempts=1,
    context_length=200000,
    output_type="string",
)

# Initialize the SwarmRouter
router = SwarmRouter(
    name="pe-document-analysis-swarm",
    description="Analyze documents for private equity due diligence and investment decision-making",
    max_loops=1,
    agents=[
        data_extractor_agent,
        summarizer_agent,
        financial_analyst_agent,
        market_analyst_agent,
        operational_analyst_agent,
    ],
    swarm_type="ConcurrentWorkflow",  # or "SequentialWorkflow", "AgentRearrange", etc.
)

# Example usage
if __name__ == "__main__":
    # Run a comprehensive private equity document analysis task
    result = router.run(
        "Where is the best place to find template term sheets for series A startups. Provide links and references"
    )
    print(result)

    # Retrieve and print logs
    for log in router.get_logs():
        print(f"{log.timestamp} - {log.level}: {log.message}")

```

### Changing Swarm Types

You can create multiple SwarmRouter instances with different swarm types:

```python
sequential_router = SwarmRouter(
    name="SequentialRouter",
    agents=[
        data_extractor_agent,
        summarizer_agent,
        financial_analyst_agent,
        market_analyst_agent,
        operational_analyst_agent,
    ],
    swarm_type=SwarmType.SequentialWorkflow
)

concurrent_router = SwarmRouter(
    name="ConcurrentRouter",
    agents=[
        data_extractor_agent,
        summarizer_agent,
        financial_analyst_agent,
        market_analyst_agent,
        operational_analyst_agent,
    ],
    swarm_type=SwarmType.ConcurrentWorkflow
)
```

### AgentRearrange

Use Case: Optimizing agent order for complex multi-step tasks.

```python
rearrange_router = SwarmRouter(
    name="TaskOptimizer",
    description="Optimize agent order for multi-step tasks",
    max_loops=3,
    agents=[
        data_extractor_agent,
        summarizer_agent,
        financial_analyst_agent,
        market_analyst_agent,
        operational_analyst_agent,
    ],
    swarm_type=SwarmType.AgentRearrange,
    flow=f"{data_extractor_agent.agent_name} -> {summarizer_agent.agent_name} -> {financial_analyst_agent.agent_name}",
)

result = rearrange_router.run("Analyze and summarize the quarterly financial report")
```

### MixtureOfAgents

Use Case: Combining diverse expert agents for comprehensive analysis.

```python
mixture_router = SwarmRouter(
    name="ExpertPanel",
    description="Combine insights from various expert agents",
    max_loops=1,
    agents=[
        data_extractor_agent,
        summarizer_agent,
        financial_analyst_agent,
        market_analyst_agent,
        operational_analyst_agent,
    ],
    swarm_type=SwarmType.MixtureOfAgents
)

result = mixture_router.run("Evaluate the potential acquisition of TechStartup Inc.")
```

----------

## Onboarding Session

Get onboarded now with the creator and lead maintainer of Swarms, Kye Gomez, who will show you how to get started with the installation and usage examples, and help you start building your custom use case! [CLICK HERE](https://cal.com/swarms/swarms-onboarding-session)

---

## Documentation

Documentation is located at [docs.swarms.world](https://docs.swarms.world)

-----

## Folder Structure

The swarms package has been meticulously crafted for extreme usability and understanding. It is split into modules such as `swarms.agents`, which holds pre-built agents, and `swarms.structs`, which holds a vast array of structures like `Agent` and multi-agent structures. The three most important modules are `structs`, `models`, and `agents`.

```sh
├── __init__.py
├── agents
├── artifacts
├── memory
├── schemas
├── models -> swarm_models
├── prompts
├── structs
├── telemetry
├── tools
├── utils
└── workers
```
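
The two structures used throughout the examples above can be imported as follows (a minimal sketch: the `swarms.structs.agent` path is the one used by `api/advanced_api.py` in this repository, while the `SwarmRouter` module path is an assumption; check docs.swarms.world for your installed version):

```python
from swarms.structs.agent import Agent  # core single-agent primitive
from swarms.structs.swarm_router import SwarmRouter  # assumed module path
```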

----

## 🫶 Contributions:

The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/swarms/gateway/issues) | Feature Request? [File here](https://github.com/swarms/gateway/issues)

Swarms is an open-source project, and contributions are VERY welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to the [CONTRIBUTING.md](https://github.com/kyegomez/swarms/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/1) to participate in Roadmap discussions!

----

## Accelerate Backlog

Help us accelerate our backlog of bug fixes, features, and demos by supporting us here:

<a href="https://polar.sh/kyegomez"><img src="https://polar.sh/embed/fund-our-backlog.svg?org=kyegomez" /></a>

## Community

Join our growing community around the world for real-time support, ideas, and discussions on Swarms 😊

- View our official [Blog](https://docs.swarms.world)
- Chat live with us on [Discord](https://discord.gg/kS3rwKs3ZC)
- Follow us on [Twitter](https://twitter.com/kyegomez)
- Connect with us on [LinkedIn](https://www.linkedin.com/company/the-swarm-corporation)
- Visit us on [YouTube](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)
- Join our Swarms Community Gathering every Thursday at 1pm NYC Time to unlock the potential of autonomous agents in automating your daily tasks [Sign up here](https://lu.ma/5p2jnc2v)

# License

GNU AFFERO GENERAL PUBLIC LICENSE

SECURITY.md
ADDED
@@ -0,0 +1,38 @@
# Security Policy

| Security Feature | Benefit | Description |
|-------------------------------|------------------------------------------|-----------------------------------------------------------------------------|
| Environment Variables | Secure Configuration | Uses environment variables to manage sensitive configurations securely. |
| No Telemetry | Enhanced Privacy | Prioritizes user privacy by not collecting telemetry data. |
| Data Encryption | Data Protection | Encrypts sensitive data to protect it from unauthorized access. |
| Authentication | Access Control | Ensures that only authorized users can access the system. |
| Authorization | Fine-grained Access | Provides specific access rights to users based on roles and permissions. |
| Dependency Security | Reduced Vulnerabilities | Securely manages dependencies to prevent vulnerabilities. |
| Secure Installation | Integrity Assurance | Ensures the integrity of the software through verified sources and checksums. |
| Regular Updates | Ongoing Protection | Keeps the system secure by regularly updating to patch vulnerabilities. |
| Logging and Monitoring | Operational Oversight | Tracks system activity for security monitoring and anomaly detection. |
| Error Handling | Robust Security | Manages errors securely to prevent leakage of sensitive information. |
| Data Storage Security | Secure Data Handling | Stores data securely, ensuring confidentiality and integrity. |
| Data Transmission Security | Secure Data Transfer | Protects data during transit from eavesdropping and tampering. |
| Access Control Mechanisms | Restricted Access | Limits system access to authorized personnel only. |
| Vulnerability Management | Proactive Protection | Identifies and mitigates security vulnerabilities effectively. |
| Regulatory Compliance | Legal Conformity | Ensures that the system adheres to relevant legal and regulatory standards. |
| Security Audits | Verified Posture | Periodic audits review the system's security controls and practices. |
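
As a small illustration of the environment-variable approach above (the variable name and `.env` usage here are hypothetical, not a documented requirement):

```python
import os

from dotenv import load_dotenv  # python-dotenv

load_dotenv()  # pull values from a local .env file, if one exists
api_key = os.getenv("OPENAI_API_KEY")  # hypothetical variable name
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set; refusing to start")
```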

# Reporting a Vulnerability

If you discover a security vulnerability in any of the above versions, please report it immediately to our security team by sending an email to [email protected]. We take security vulnerabilities seriously and appreciate your efforts in disclosing them responsibly.

Please provide detailed information on the vulnerability, including steps to reproduce, potential impact, and any known mitigations. Our security team will acknowledge receipt of your report within 24 hours and will provide regular updates on the progress of the investigation.

Once the vulnerability has been thoroughly assessed, we will take the necessary steps to address it. This may include releasing a security patch, issuing a security advisory, or implementing other appropriate mitigations.

We aim to respond to all vulnerability reports in a timely manner and work towards resolving them as quickly as possible. Thank you for your contribution to the security of our software.

Please note that any vulnerability reports that are not related to the specified versions or do not provide sufficient information may be declined.

api/advanced_api.py
ADDED
@@ -0,0 +1,1282 @@
import multiprocessing
import os
import secrets
import signal
import sys
import threading
import time
import traceback
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from multiprocessing import Lock, Process, Queue, Value
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple
from uuid import UUID, uuid4

import httpx
import psutil
import uvicorn
from dotenv import load_dotenv
from fastapi import (
    BackgroundTasks,
    Depends,
    FastAPI,
    Header,
    HTTPException,
    Query,
    Request,
    Response,  # used to relay proxied responses in the load-balancing middleware
    status,
)
from fastapi.middleware.cors import CORSMiddleware
from loguru import logger
from pydantic import BaseModel, Field

from swarms.structs.agent import Agent

# Load environment variables
load_dotenv()


# # Set start method to 'fork' at the very beginning of the script
# multiprocessing.set_start_method('fork')


@dataclass
class ProcessMetrics:
    """Metrics for each API process."""

    pid: int
    cpu_usage: float
    memory_usage: float
    request_count: int
    last_heartbeat: float
    port: int


class ProcessManager:
    """Manages multiple API processes and their metrics."""

    def __init__(self, num_processes: int = None, start_port: int = 8000):
        self.num_processes = num_processes or multiprocessing.cpu_count()
        self.start_port = start_port
        self.processes: Dict[int, Process] = {}
        self.metrics: Dict[int, ProcessMetrics] = {}
        self.metrics_lock = Lock()
        self.heartbeat_queue = Queue()
        self.shutdown_event = multiprocessing.Event()

    def start_api_process(self, port: int) -> Process:
        """Start a single API process on the specified port."""
        process = Process(
            target=run_api_instance,
            args=(port, self.heartbeat_queue, self.shutdown_event),
        )
        process.start()
        return process

    def start_all_processes(self):
        """Start all API processes."""
        for i in range(self.num_processes):
            port = self.start_port + i + 1
            process = self.start_api_process(port)
            self.processes[process.pid] = process
            self.metrics[process.pid] = ProcessMetrics(
                pid=process.pid,
                cpu_usage=0.0,
                memory_usage=0.0,
                request_count=0,
                last_heartbeat=time.time(),
                port=port,
            )

    def monitor_processes(self):
        """Monitor process health and metrics."""
        while not self.shutdown_event.is_set():
            try:
                # Update metrics from the heartbeat queue
                while not self.heartbeat_queue.empty():
                    pid, cpu, memory, requests = self.heartbeat_queue.get_nowait()
                    with self.metrics_lock:
                        if pid in self.metrics:
                            self.metrics[pid].cpu_usage = cpu
                            self.metrics[pid].memory_usage = memory
                            self.metrics[pid].request_count = requests
                            self.metrics[pid].last_heartbeat = time.time()

                # Check for dead processes and restart them
                current_time = time.time()
                with self.metrics_lock:
                    for pid, metrics in list(self.metrics.items()):
                        if current_time - metrics.last_heartbeat > 30:  # 30 s timeout
                            print(f"Process {pid} appears to be dead, restarting...")
                            if pid in self.processes:
                                self.processes[pid].terminate()
                                del self.processes[pid]
                            new_process = self.start_api_process(metrics.port)
                            self.processes[new_process.pid] = new_process
                            self.metrics[new_process.pid] = ProcessMetrics(
                                pid=new_process.pid,
                                cpu_usage=0.0,
                                memory_usage=0.0,
                                request_count=0,
                                last_heartbeat=time.time(),
                                port=metrics.port,
                            )
                            del self.metrics[pid]

                time.sleep(1)
            except Exception as e:
                print(f"Error in process monitoring: {e}")

    def shutdown(self):
        """Shutdown all processes gracefully."""
        self.shutdown_event.set()
        for process in self.processes.values():
            process.terminate()
            process.join()
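

# --- Illustrative usage sketch (comments only; the class names are from this
# module, but the driver below is hypothetical, not part of the API) ---
# manager = ProcessManager(num_processes=4, start_port=8000)
# manager.start_all_processes()  # workers listen on ports 8001..8004
# threading.Thread(target=manager.monitor_processes, daemon=True).start()
# ...serve traffic...
# manager.shutdown()  # terminate and join all workers on exit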


class AgentStatus(str, Enum):
    """Enum for agent status."""

    IDLE = "idle"
    PROCESSING = "processing"
    ERROR = "error"
    MAINTENANCE = "maintenance"


# Security configurations
API_KEY_LENGTH = 32  # Length of generated API keys


class APIKey(BaseModel):
    key: str
    name: str
    created_at: datetime
    last_used: datetime
    is_active: bool = True


class APIKeyCreate(BaseModel):
    name: str  # A friendly name for the API key


class User(BaseModel):
    id: UUID
    username: str
    is_active: bool = True
    is_admin: bool = False
    api_keys: Dict[str, APIKey] = {}  # key -> APIKey object


class AgentConfig(BaseModel):
    """Configuration model for creating a new agent."""

    agent_name: str = Field(..., description="Name of the agent")
    model_name: str = Field(
        default="gpt-4",
        description="Name of the llm you want to use, as provided by litellm",
    )
    description: str = Field(
        default="", description="Description of the agent's purpose"
    )
    system_prompt: str = Field(..., description="System prompt for the agent")
    temperature: float = Field(
        default=0.1, ge=0.0, le=2.0, description="Temperature for the model"
    )
    max_loops: int = Field(default=1, ge=1, description="Maximum number of loops")
    autosave: bool = Field(default=True, description="Enable autosave")
    dashboard: bool = Field(default=False, description="Enable dashboard")
    verbose: bool = Field(default=True, description="Enable verbose output")
    dynamic_temperature_enabled: bool = Field(
        default=True, description="Enable dynamic temperature"
    )
    user_name: str = Field(
        default="default_user", description="Username for the agent"
    )
    retry_attempts: int = Field(
        default=1, ge=1, description="Number of retry attempts"
    )
    context_length: int = Field(
        default=200000, ge=1000, description="Context length"
    )
    output_type: str = Field(
        default="string", description="Output type (string or json)"
    )
    streaming_on: bool = Field(default=False, description="Enable streaming")
    tags: List[str] = Field(
        default_factory=list, description="Tags for categorizing the agent"
    )


class AgentUpdate(BaseModel):
    """Model for updating agent configuration."""

    description: Optional[str] = None
    system_prompt: Optional[str] = None
    temperature: Optional[float] = 0.5
    max_loops: Optional[int] = 1
    tags: Optional[List[str]] = None
    status: Optional[AgentStatus] = None


class AgentSummary(BaseModel):
    """Summary model for agent listing."""

    agent_id: UUID
    agent_name: str
    description: str
    created_at: datetime
    last_used: datetime
    total_completions: int
    tags: List[str]
    status: AgentStatus


class AgentMetrics(BaseModel):
    """Model for agent performance metrics."""

    total_completions: int
    average_response_time: float
    error_rate: float
    last_24h_completions: int
    total_tokens_used: int
    uptime_percentage: float
    success_rate: float
    peak_tokens_per_minute: int


class CompletionRequest(BaseModel):
    """Model for completion requests."""

    prompt: str = Field(..., description="The prompt to process")
    agent_id: UUID = Field(..., description="ID of the agent to use")
    max_tokens: Optional[int] = Field(
        None, description="Maximum tokens to generate"
    )
    temperature_override: Optional[float] = 0.5
    stream: bool = Field(default=False, description="Enable streaming response")


class CompletionResponse(BaseModel):
    """Model for completion responses."""

    agent_id: UUID
    response: str
    metadata: Dict[str, Any]
    timestamp: datetime
    processing_time: float
    token_usage: Dict[str, int]
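

# --- Illustrative request body for POST /v1/agent (hypothetical values) ---
# {
#   "agent_name": "Financial-Analyst",
#   "model_name": "gpt-4",
#   "system_prompt": "You are a financial analysis agent...",
#   "temperature": 0.1,
#   "max_loops": 1,
#   "tags": ["finance", "analysis"]
# }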


class AgentStore:
    """Enhanced store for managing agents."""

    def __init__(self):
        self.agents: Dict[UUID, Agent] = {}
        self.agent_metadata: Dict[UUID, Dict[str, Any]] = {}
        self.users: Dict[UUID, User] = {}  # user_id -> User
        self.api_keys: Dict[str, UUID] = {}  # api_key -> user_id
        self.user_agents: Dict[UUID, List[UUID]] = {}  # user_id -> [agent_ids]
        self.executor = ThreadPoolExecutor(max_workers=4)
        self.total_requests = Value("i", 0)  # Shared counter for total requests
        self._ensure_directories()

    def increment_request_count(self):
        """Increment the total request counter."""
        with self.total_requests.get_lock():
            self.total_requests.value += 1

    def get_total_requests(self) -> int:
        """Get the total number of requests processed."""
        return self.total_requests.value

    def _ensure_directories(self):
        """Ensure required directories exist."""
        Path("logs").mkdir(exist_ok=True)
        Path("states").mkdir(exist_ok=True)

    def create_api_key(self, user_id: UUID, key_name: str) -> APIKey:
        """Create a new API key for a user."""
        if user_id not in self.users:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail="User not found",
            )

        # Generate a secure random API key
        api_key = secrets.token_urlsafe(API_KEY_LENGTH)

        # Create the API key object
        key_object = APIKey(
            key=api_key,
            name=key_name,
            created_at=datetime.utcnow(),
            last_used=datetime.utcnow(),
        )

        # Store the API key
        self.users[user_id].api_keys[api_key] = key_object
        self.api_keys[api_key] = user_id

        return key_object

    async def verify_agent_access(self, agent_id: UUID, user_id: UUID) -> bool:
        """Verify if a user has access to an agent."""
        if agent_id not in self.agents:
            return False
        return (
            self.agent_metadata[agent_id]["owner_id"] == user_id
            or self.users[user_id].is_admin
        )

    def validate_api_key(self, api_key: str) -> Optional[UUID]:
        """Validate an API key and return the associated user ID."""
        user_id = self.api_keys.get(api_key)
        if not user_id or api_key not in self.users[user_id].api_keys:
            return None

        key_object = self.users[user_id].api_keys[api_key]
        if not key_object.is_active:
            return None

        # Update last used timestamp
        key_object.last_used = datetime.utcnow()
        return user_id

    async def create_agent(self, config: AgentConfig, user_id: UUID) -> UUID:
        """Create a new agent with the given configuration."""
        try:
            agent = Agent(
                agent_name=config.agent_name,
                system_prompt=config.system_prompt,
                model_name=config.model_name,
                max_loops=config.max_loops,
                autosave=config.autosave,
                dashboard=config.dashboard,
                verbose=config.verbose,
                dynamic_temperature_enabled=True,
                saved_state_path=f"states/{config.agent_name}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
                user_name=config.user_name,
                retry_attempts=config.retry_attempts,
                context_length=config.context_length,
                return_step_meta=True,
                output_type="str",
                streaming_on=config.streaming_on,
            )

            agent_id = uuid4()
            self.agents[agent_id] = agent
            self.agent_metadata[agent_id] = {
                "owner_id": user_id,  # consulted by verify_agent_access and clone_agent
                "description": config.description,
                "created_at": datetime.utcnow(),
                "last_used": datetime.utcnow(),
                "total_completions": 0,
                "tags": config.tags,
                "total_tokens": 0,
                "error_count": 0,
                "response_times": [],  # per-completion durations in seconds
                "completion_timestamps": [],  # when each completion started
                "status": AgentStatus.IDLE,
                "start_time": datetime.utcnow(),
                "downtime": timedelta(),
                "successful_completions": 0,
            }

            # Add to user's agents list
            if user_id not in self.user_agents:
                self.user_agents[user_id] = []
            self.user_agents[user_id].append(agent_id)

            return agent_id

        except Exception as e:
            logger.error(f"Error creating agent: {str(e)}")
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Failed to create agent: {str(e)}",
            )

    async def get_agent(self, agent_id: UUID) -> Agent:
        """Retrieve an agent by ID."""
        agent = self.agents.get(agent_id)
        if not agent:
            logger.error(f"Agent not found: {agent_id}")
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail=f"Agent {agent_id} not found",
            )
        return agent

    async def update_agent(self, agent_id: UUID, update: AgentUpdate) -> None:
        """Update agent configuration."""
        agent = await self.get_agent(agent_id)
        metadata = self.agent_metadata[agent_id]

        if update.system_prompt:
            agent.system_prompt = update.system_prompt
        if update.max_loops is not None:
            agent.max_loops = update.max_loops
        if update.tags is not None:
            metadata["tags"] = update.tags
        if update.description is not None:
            metadata["description"] = update.description
        if update.status is not None:
            metadata["status"] = update.status
            if update.status == AgentStatus.MAINTENANCE:
                metadata["downtime"] += datetime.utcnow() - metadata["last_used"]

        logger.info(f"Updated agent {agent_id}")

    async def list_agents(
        self,
        tags: Optional[List[str]] = None,
        status: Optional[AgentStatus] = None,
    ) -> List[AgentSummary]:
        """List all agents, optionally filtered by tags and status."""
        summaries = []
        for agent_id, agent in self.agents.items():
            metadata = self.agent_metadata[agent_id]

            # Apply filters
            if tags and not any(tag in metadata["tags"] for tag in tags):
                continue
            if status and metadata["status"] != status:
                continue

            summaries.append(
                AgentSummary(
                    agent_id=agent_id,
                    agent_name=agent.agent_name,
                    description=metadata["description"],
                    created_at=metadata["created_at"],
                    last_used=metadata["last_used"],
                    total_completions=metadata["total_completions"],
                    tags=metadata["tags"],
                    status=metadata["status"],
                )
            )
        return summaries

    async def get_agent_metrics(self, agent_id: UUID) -> AgentMetrics:
        """Get performance metrics for an agent."""
        metadata = self.agent_metadata[agent_id]
        response_times = metadata["response_times"]

        # Calculate metrics
        total_time = datetime.utcnow() - metadata["start_time"]
        uptime = total_time - metadata["downtime"]
        uptime_percentage = (
            uptime.total_seconds() / total_time.total_seconds()
        ) * 100

        success_rate = (
            metadata["successful_completions"]
            / metadata["total_completions"]
            * 100
            if metadata["total_completions"] > 0
            else 0
        )

        return AgentMetrics(
            total_completions=metadata["total_completions"],
            average_response_time=(
                sum(response_times) / len(response_times) if response_times else 0
            ),
            error_rate=(
                metadata["error_count"] / metadata["total_completions"]
                if metadata["total_completions"] > 0
                else 0
            ),
            # Count completions in the last 24 hours from the timestamp list
            # (response_times holds durations, not timestamps).
            last_24h_completions=sum(
                1
                for t in metadata.get("completion_timestamps", [])
                if (datetime.utcnow() - t).days < 1
            ),
            total_tokens_used=metadata["total_tokens"],
            uptime_percentage=uptime_percentage,
            success_rate=success_rate,
            # tokens_per_minute maps minute -> token count; take the peak value
            peak_tokens_per_minute=max(
                metadata.get("tokens_per_minute", {}).values(), default=0
            ),
        )

    async def clone_agent(self, agent_id: UUID, new_name: str) -> UUID:
        """Clone an existing agent with a new name."""
        original_agent = await self.get_agent(agent_id)
        original_metadata = self.agent_metadata[agent_id]

        config = AgentConfig(
            agent_name=new_name,
            description=f"Clone of {original_agent.agent_name}",
            system_prompt=original_agent.system_prompt,
            model_name=original_agent.model_name,
            temperature=0.5,
            max_loops=original_agent.max_loops,
            tags=original_metadata["tags"],
        )

        # The clone is owned by the same user as the original agent.
        return await self.create_agent(config, original_metadata["owner_id"])

    async def delete_agent(self, agent_id: UUID) -> None:
        """Delete an agent."""
        if agent_id not in self.agents:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail=f"Agent {agent_id} not found",
            )

        # Clean up any resources
        agent = self.agents[agent_id]
        if agent.autosave and os.path.exists(agent.saved_state_path):
            os.remove(agent.saved_state_path)

        del self.agents[agent_id]
        del self.agent_metadata[agent_id]
        logger.info(f"Deleted agent {agent_id}")

    async def process_completion(
        self,
        agent: Agent,
        prompt: str,
        agent_id: UUID,
        max_tokens: Optional[int] = None,
        temperature_override: Optional[float] = None,
    ) -> CompletionResponse:
        """Process a completion request using the specified agent."""
        start_time = datetime.utcnow()
        metadata = self.agent_metadata[agent_id]

        try:
            # Update agent status
            metadata["status"] = AgentStatus.PROCESSING
            metadata["last_used"] = start_time

            # Process the completion
            response = agent.run(prompt)

            # Update metrics
            processing_time = (datetime.utcnow() - start_time).total_seconds()
            metadata["response_times"].append(processing_time)
            metadata.setdefault("completion_timestamps", []).append(start_time)
            metadata["total_completions"] += 1
            metadata["successful_completions"] += 1

            # Estimate token usage (this is a rough estimate)
            prompt_tokens = len(prompt.split()) * 1.3
            completion_tokens = len(response.split()) * 1.3
            total_tokens = int(prompt_tokens + completion_tokens)
            metadata["total_tokens"] += total_tokens

            # Update tokens per minute tracking
            current_minute = datetime.utcnow().replace(second=0, microsecond=0)
            if "tokens_per_minute" not in metadata:
                metadata["tokens_per_minute"] = {}
            metadata["tokens_per_minute"][current_minute] = (
                metadata["tokens_per_minute"].get(current_minute, 0) + total_tokens
            )

            return CompletionResponse(
                agent_id=agent_id,
                response=response,
                metadata={
                    "agent_name": agent.agent_name,
                    # "model_name": agent.llm.model_name,
                    # "temperature": 0.5,
                },
                timestamp=datetime.utcnow(),
                processing_time=processing_time,
                token_usage={
                    "prompt_tokens": int(prompt_tokens),
                    "completion_tokens": int(completion_tokens),
                    "total_tokens": total_tokens,
                },
            )

        except Exception as e:
            metadata["error_count"] += 1
            metadata["status"] = AgentStatus.ERROR
            logger.error(
                f"Error in completion processing: {str(e)}\n{traceback.format_exc()}"
            )
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Error processing completion: {str(e)}",
            )
        finally:
            metadata["status"] = AgentStatus.IDLE
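

# --- Design note (illustrative): StoreManager below is a process-local
# singleton, so every route handler in one worker shares one AgentStore. ---
# store = StoreManager.get_instance()
# assert store is StoreManager.get_instance()  # same object on every call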


class StoreManager:
    _instance = None

    @classmethod
    def get_instance(cls) -> "AgentStore":
        if cls._instance is None:
            cls._instance = AgentStore()
        return cls._instance


# Modify the dependency function
def get_store() -> AgentStore:
    """Dependency to get the AgentStore instance."""
    return StoreManager.get_instance()


# Security utility function using the new dependency
async def get_current_user(
    api_key: str = Header(..., description="API key for authentication"),
    store: AgentStore = Depends(get_store),
) -> User:
    """Validate API key and return current user."""
    user_id = store.validate_api_key(api_key)
    if not user_id:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid or expired API key",
            headers={"WWW-Authenticate": "ApiKey"},
        )
    return store.users[user_id]


class SwarmsAPI:
    """Enhanced API class for Swarms agent integration."""

    def __init__(self):
        self.app = FastAPI(
            title="Swarms Agent API",
            description="Production-grade API for Swarms agent interaction",
            version="1.0.0",
            docs_url="/v1/docs",
            redoc_url="/v1/redoc",
        )
        # Initialize the store using the singleton manager
        self.store = StoreManager.get_instance()

        # Configure CORS
        self.app.add_middleware(
            CORSMiddleware,
            allow_origins=["*"],  # Configure appropriately for production
            allow_credentials=True,
            allow_methods=["*"],
            allow_headers=["*"],
        )

        self._setup_routes()

    def _setup_routes(self):
        """Set up API routes."""

        @self.app.post("/v1/users", response_model=Dict[str, Any])
        async def create_user(request: Request):
            """Create a new user and initial API key."""
            try:
                body = await request.json()
                username = body.get("username")
                if not username or len(username) < 3:
                    raise HTTPException(
                        status_code=400, detail="Invalid username"
                    )

                user_id = uuid4()
                user = User(id=user_id, username=username)
                self.store.users[user_id] = user
                initial_key = self.store.create_api_key(user_id, "Initial Key")
                return {
                    "user_id": user_id,
                    "api_key": initial_key.key,
                }
            except Exception as e:
                logger.error(f"Error creating user: {str(e)}")
                raise HTTPException(status_code=400, detail=str(e))

        @self.app.post("/v1/users/{user_id}/api-keys", response_model=APIKey)
        async def create_api_key(
            user_id: UUID,
            key_create: APIKeyCreate,
            current_user: User = Depends(get_current_user),
        ):
            """Create a new API key for a user."""
            if current_user.id != user_id and not current_user.is_admin:
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail="Not authorized to create API keys for this user",
                )

            return self.store.create_api_key(user_id, key_create.name)

        @self.app.get(
            "/v1/users/{user_id}/api-keys", response_model=List[APIKey]
        )
        async def list_api_keys(
            user_id: UUID,
            current_user: User = Depends(get_current_user),
        ):
            """List all API keys for a user."""
            if current_user.id != user_id and not current_user.is_admin:
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail="Not authorized to view API keys for this user",
                )

            return list(self.store.users[user_id].api_keys.values())

        @self.app.delete("/v1/users/{user_id}/api-keys/{key}")
        async def revoke_api_key(
            user_id: UUID,
            key: str,
            current_user: User = Depends(get_current_user),
        ):
            """Revoke an API key."""
            if current_user.id != user_id and not current_user.is_admin:
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail="Not authorized to revoke API keys for this user",
                )

            if key in self.store.users[user_id].api_keys:
                self.store.users[user_id].api_keys[key].is_active = False
                del self.store.api_keys[key]
                return {"status": "API key revoked"}

            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail="API key not found",
            )

        @self.app.get(
            "/v1/users/me/agents", response_model=List[AgentSummary]
        )
        async def list_user_agents(
            current_user: User = Depends(get_current_user),
            tags: Optional[List[str]] = Query(None),
            status: Optional[AgentStatus] = None,
        ):
            """List all agents owned by the current user."""
            user_agents = self.store.user_agents.get(current_user.id, [])
            return [
                agent
                for agent in await self.store.list_agents(tags, status)
                if agent.agent_id in user_agents
            ]

        @self.app.middleware("http")
        async def count_requests(request: Request, call_next):
            """Middleware to count all incoming requests."""
            self.store.increment_request_count()
            response = await call_next(request)
            return response

        # Modify existing routes to use API key authentication
        @self.app.post("/v1/agent", response_model=Dict[str, UUID])
        async def create_agent(
            config: AgentConfig,
            current_user: User = Depends(get_current_user),
        ):
            """Create a new agent with the specified configuration."""
            agent_id = await self.store.create_agent(config, current_user.id)
            return {"agent_id": agent_id}

        @self.app.get("/v1/agents", response_model=List[AgentSummary])
        async def list_agents(
            tags: Optional[List[str]] = Query(None),
            status: Optional[AgentStatus] = None,
        ):
            """List all agents, optionally filtered by tags and status."""
            return await self.store.list_agents(tags, status)

        @self.app.patch("/v1/agent/{agent_id}", response_model=Dict[str, str])
        async def update_agent(agent_id: UUID, update: AgentUpdate):
            """Update an existing agent's configuration."""
            await self.store.update_agent(agent_id, update)
            return {"status": "updated"}

        @self.app.get(
            "/v1/agent/{agent_id}/metrics", response_model=AgentMetrics
        )
        async def get_agent_metrics(agent_id: UUID):
            """Get performance metrics for a specific agent."""
            return await self.store.get_agent_metrics(agent_id)

        @self.app.post(
            "/v1/agent/{agent_id}/clone", response_model=Dict[str, UUID]
        )
        async def clone_agent(agent_id: UUID, new_name: str):
            """Clone an existing agent with a new name."""
            new_id = await self.store.clone_agent(agent_id, new_name)
            return {"agent_id": new_id}

        @self.app.delete("/v1/agent/{agent_id}")
        async def delete_agent(agent_id: UUID):
            """Delete an agent."""
            await self.store.delete_agent(agent_id)
            return {"status": "deleted"}

        @self.app.post(
            "/v1/agent/completions", response_model=CompletionResponse
        )
        async def create_completion(
            request: CompletionRequest,
            background_tasks: BackgroundTasks,
        ):
            """Process a completion request with the specified agent."""
            try:
                agent = await self.store.get_agent(request.agent_id)

                # Process completion, forwarding the caller's temperature override
                response = await self.store.process_completion(
                    agent,
                    request.prompt,
                    request.agent_id,
                    request.max_tokens,
                    request.temperature_override,
                )

                # Schedule background cleanup
                background_tasks.add_task(
                    self._cleanup_old_metrics, request.agent_id
                )

                return response

            except Exception as e:
                logger.error(f"Error processing completion: {str(e)}")
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Error processing completion: {str(e)}",
                )

        @self.app.get("/v1/agent/{agent_id}/status")
        async def get_agent_status(agent_id: UUID):
            """Get the current status of an agent."""
            metadata = self.store.agent_metadata.get(agent_id)
            if not metadata:
                raise HTTPException(
                    status_code=status.HTTP_404_NOT_FOUND,
                    detail=f"Agent {agent_id} not found",
                )
            return {
                "agent_id": agent_id,
                "status": metadata["status"],
                "last_used": metadata["last_used"],
                "total_completions": metadata["total_completions"],
                "error_count": metadata["error_count"],
            }

    async def _cleanup_old_metrics(self, agent_id: UUID):
        """Clean up old metrics data to prevent memory bloat."""
        metadata = self.store.agent_metadata.get(agent_id)
        if metadata:
            cutoff = datetime.utcnow() - timedelta(days=1)
            # response_times stores durations (seconds), not timestamps, so it
            # is bounded by count; completion timestamps are trimmed by the
            # 24-hour cutoff.
            metadata["response_times"] = metadata["response_times"][-10_000:]
            metadata["completion_timestamps"] = [
                t
                for t in metadata.get("completion_timestamps", [])
                if t > cutoff
            ]

            # Clean up old tokens per minute data
            if "tokens_per_minute" in metadata:
                metadata["tokens_per_minute"] = {
                    k: v
                    for k, v in metadata["tokens_per_minute"].items()
                    if k > cutoff
                }
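

# --- Illustrative end-to-end client flow (hypothetical; assumes an instance
# on localhost:8000 and that FastAPI maps the `api_key` header parameter to
# an `api-key` HTTP header) ---
# import httpx
# user = httpx.post("http://localhost:8000/v1/users",
#                   json={"username": "alice"}).json()
# headers = {"api-key": user["api_key"]}
# agent = httpx.post("http://localhost:8000/v1/agent", headers=headers,
#                    json={"agent_name": "demo", "system_prompt": "..."}).json()
# httpx.post("http://localhost:8000/v1/agent/completions", headers=headers,
#            json={"prompt": "Hello", "agent_id": agent["agent_id"]})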


def run_api_instance(port: int, heartbeat_queue: Queue, shutdown_event: Any):
    """Run a single API instance and report metrics."""
    try:
        # Initialize API
        api = SwarmsAPI()
        process = psutil.Process()

        # Start metrics reporting
        def report_metrics():
            while not shutdown_event.is_set():
                try:
                    cpu_percent = process.cpu_percent()
                    memory_percent = process.memory_percent()
                    heartbeat_queue.put(
                        (
                            process.pid,
                            cpu_percent,
                            memory_percent,
                            api.store.get_total_requests(),
                        )
                    )
                    time.sleep(5)
                except Exception as e:
                    logger.error(f"Error reporting metrics: {e}")

        metrics_thread = threading.Thread(target=report_metrics)
        metrics_thread.daemon = True
        metrics_thread.start()

        # Run API
        uvicorn.run(api.app, host="0.0.0.0", port=port, log_level="info")

    except Exception as e:
        logger.error(f"Error in API instance: {e}")
        sys.exit(1)


class MultiProcessManager:
    """Manages multiple API processes."""

    def __init__(self, base_port: int = 8000, num_processes: int = None):
        self.base_port = base_port
        self.num_processes = num_processes or multiprocessing.cpu_count()
        self.processes: Dict[int, Process] = {}
        self.metrics: Dict[int, ProcessMetrics] = {}
        self.active = Value("b", True)

    def start_process(self, port: int) -> Process:
        """Start a single API process."""
        # run_worker only needs a port; run_api_instance (used by
        # ProcessManager above) also requires a heartbeat queue and shutdown
        # event, which this manager does not maintain.
        process = Process(target=run_worker, args=(port,))
        process.start()
        self.metrics[process.pid] = ProcessMetrics(
            pid=process.pid,
            cpu_usage=0.0,
            memory_usage=0.0,
            request_count=0,
            last_heartbeat=time.time(),
            port=port,
        )
        self.processes[process.pid] = process
        return process

    def monitor_processes(self):
        """Monitor process health and metrics."""
        while self.active.value:
            for pid, metrics in list(self.metrics.items()):
                try:
                    # Update process metrics
                    process = psutil.Process(pid)
                    metrics.cpu_usage = process.cpu_percent()
                    metrics.memory_usage = process.memory_percent()
                    metrics.last_heartbeat = time.time()
                except psutil.NoSuchProcess:
                    # Restart dead process
                    logger.warning(f"Process {pid} died, restarting...")
                    if pid in self.processes:
                        self.processes[pid].terminate()
                        del self.processes[pid]
                    self.start_process(metrics.port)
                    del self.metrics[pid]
            time.sleep(5)

    def start(self):
        """Start all API processes."""
        logger.info(f"Starting {self.num_processes} API processes...")

        # Start worker processes
        for i in range(self.num_processes):
            port = self.base_port + i + 1
            self.start_process(port)

        # Start monitoring thread
        monitor_thread = threading.Thread(target=self.monitor_processes)
        monitor_thread.daemon = True
        monitor_thread.start()

        logger.info("All processes started successfully")

    def shutdown(self):
        """Shutdown all processes."""
        self.active.value = False
        for process in self.processes.values():
            process.terminate()
            process.join()


def create_app() -> FastAPI:
    """Create and configure the FastAPI application."""
    logger.info("Creating FastAPI application")
    api = SwarmsAPI()
    app = api.app
    logger.info("FastAPI application created successfully")
    return app


class LoadBalancer:
    """Load balancer for distributing requests across API instances."""

    def __init__(self, process_manager: ProcessManager):
        self.process_manager = process_manager
        self.last_selected_pid = None
        self._lock = Lock()

    def get_best_instance(self) -> Tuple[int, int]:
        """Select the best instance to handle the next request based on load."""
        with self.process_manager.metrics_lock:
            valid_instances = [
                (pid, metrics)
                for pid, metrics in self.process_manager.metrics.items()
                if time.time() - metrics.last_heartbeat < 30
            ]

            if not valid_instances:
                raise RuntimeError("No healthy API instances available")

            # Calculate load score for each instance
            scores = []
            for pid, metrics in valid_instances:
                cpu_score = metrics.cpu_usage / 100.0
                memory_score = metrics.memory_usage / 100.0
                request_score = metrics.request_count / 1000.0  # Normalize request count
                total_score = (cpu_score + memory_score + request_score) / 3
                scores.append((pid, metrics.port, total_score))

            # Select instance with lowest load score
            selected_pid, selected_port, _ = min(scores, key=lambda x: x[2])
            return selected_pid, selected_port


class LoadBalancedAPI(SwarmsAPI):
    """Enhanced API class with load balancing capabilities."""

    def __init__(
        self,
        process_manager: ProcessManager,
        load_balancer: LoadBalancer,
    ):
        super().__init__()
        self.process_manager = process_manager
        self.load_balancer = load_balancer
        self.request_count = Value("i", 0)
        self.add_middleware()

    def add_middleware(self):
        """Add middleware for request routing and metrics collection."""

        @self.app.middleware("http")
        async def route_request(request: Request, call_next):
            try:
                # Increment request count
                with self.request_count.get_lock():
                    self.request_count.value += 1

                # Get best instance for processing
                pid, port = self.load_balancer.get_best_instance()

                # Forward request if not already on the best instance
                if request.url.port != port:
                    async with httpx.AsyncClient() as client:
                        forwarded_url = f"http://localhost:{port}{request.url.path}"
                        upstream = await client.request(
                            request.method,
                            forwarded_url,
                            headers=dict(request.headers),
                            content=await request.body(),
                        )
                        # Middleware must return a Starlette-compatible
                        # response, not an httpx.Response.
                        return Response(
                            content=upstream.content,
                            status_code=upstream.status_code,
                            headers=dict(upstream.headers),
                        )

                # Process request locally if already on the best instance
                response = await call_next(request)
                return response

            except Exception as e:
                logger.error(f"Error routing request: {e}")
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=str(e),
                )


def run_worker(port: int):
    """Run a single worker instance."""
    try:
        api = SwarmsAPI()
        # Log before uvicorn.run, which blocks until shutdown.
        logger.info(f"Worker starting on port {port}")
        uvicorn.run(api.app, host="0.0.0.0", port=port, log_level="info")
    except Exception as e:
        logger.error(f"Worker error: {e}")


def main():
    """Main entry point for the multi-process API."""
    # Initialize processes list before any potential exceptions
    processes = []

    try:
        # Try to get current method, only set if not already set
        try:
            current_method = multiprocessing.get_start_method()
            logger.info(f"Using existing start method: {current_method}")
        except RuntimeError:
            try:
                multiprocessing.set_start_method("fork")
                logger.info("Set start method to fork")
            except RuntimeError:
                logger.warning("Using default start method")

        # Calculate number of workers
        num_workers = max(1, multiprocessing.cpu_count() - 1)
        base_port = 8000

        # Start worker processes
        for i in range(num_workers):
            port = base_port + i + 1
            process = Process(target=run_worker, args=(port,))
            process.start()
            processes.append(process)
            logger.info(f"Started worker on port {port}")

        # Run main instance
        api = SwarmsAPI()

        def shutdown_handler(signum, frame):
            logger.info("Shutting down workers...")
            for p in processes:
                try:
                    p.terminate()
                    p.join(timeout=5)
                    logger.info(f"Worker {p.pid} terminated")
                except Exception as e:
                    logger.error(f"Error shutting down worker: {e}")
            sys.exit(0)

        signal.signal(signal.SIGINT, shutdown_handler)
        signal.signal(signal.SIGTERM, shutdown_handler)

        # Run main instance (this call blocks until shutdown)
        logger.info(f"Main instance starting on port {base_port}")
        uvicorn.run(api.app, host="0.0.0.0", port=base_port, log_level="info")

    except Exception as e:
        logger.error(f"Startup error: {e}")
        # Clean up any started processes
        for p in processes:
            try:
                p.terminate()
                p.join(timeout=5)
                logger.info(
                    f"Worker {p.pid} terminated during cleanup"
|
1275 |
+
)
|
1276 |
+
except Exception as cleanup_error:
|
1277 |
+
logger.error(f"Error during cleanup: {cleanup_error}")
|
1278 |
+
sys.exit(1)
|
1279 |
+
|
1280 |
+
|
1281 |
+
if __name__ == "__main__":
|
1282 |
+
main()
|
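
A minimal sketch of the scoring rule used by LoadBalancer.get_best_instance, run on hypothetical metrics rather than live process data (the InstanceMetrics dataclass below is a stand-in for the real per-process metrics objects):

from dataclasses import dataclass

@dataclass
class InstanceMetrics:
    """Hypothetical stand-in for the process manager's metrics entries."""
    port: int
    cpu_usage: float     # percent, 0-100
    memory_usage: float  # percent, 0-100
    request_count: int

instances = {
    101: InstanceMetrics(port=8001, cpu_usage=80.0, memory_usage=60.0, request_count=900),
    102: InstanceMetrics(port=8002, cpu_usage=20.0, memory_usage=30.0, request_count=100),
}

scores = []
for pid, m in instances.items():
    # Same normalization as above: each term is scaled to roughly [0, 1],
    # then averaged, so CPU, memory, and request volume weigh equally.
    total = (m.cpu_usage / 100.0 + m.memory_usage / 100.0 + m.request_count / 1000.0) / 3
    scores.append((pid, m.port, total))

pid, port, score = min(scores, key=lambda x: x[2])
print(f"route to pid={pid} on port={port} (score={score:.2f})")  # pid=102, port=8002, score=0.20
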
api/agent_api_test.py
ADDED
@@ -0,0 +1,291 @@
import os
import json
import logging
from typing import Dict, Optional, Any
from dataclasses import dataclass
import requests
import time

# Set up logging
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
    handlers=[
        logging.FileHandler("api_tests.log"),
        logging.StreamHandler(),
    ],
)
logger = logging.getLogger(__name__)


# Configuration
@dataclass
class TestConfig:
    """Test configuration settings"""

    base_url: str
    timeout: int = 30
    verify_ssl: bool = True
    debug: bool = True


# Load config from environment or use defaults
config = TestConfig(
    base_url=os.getenv("API_BASE_URL", "http://0.0.0.0:8000/v1")
)


class APIClient:
    """API client for testing"""

    def __init__(self, config: TestConfig):
        self.config = config
        self.session = requests.Session()

    def _url(self, path: str) -> str:
        """Construct full URL"""
        return f"{self.config.base_url}/{path.lstrip('/')}"

    def _log_request_details(
        self, method: str, url: str, headers: Dict, data: Any
    ):
        """Log request details for debugging"""
        logger.info("\nRequest Details:")
        logger.info(f"Method: {method}")
        logger.info(f"URL: {url}")
        logger.info(f"Headers: {json.dumps(headers, indent=2)}")
        logger.info(
            f"Data: {json.dumps(data, indent=2) if data else None}"
        )

    def _log_response_details(self, response: requests.Response):
        """Log response details for debugging"""
        logger.info("\nResponse Details:")
        logger.info(f"Status Code: {response.status_code}")
        logger.info(
            f"Headers: {json.dumps(dict(response.headers), indent=2)}"
        )
        try:
            logger.info(
                f"Body: {json.dumps(response.json(), indent=2)}"
            )
        except Exception:
            logger.info(f"Body: {response.text}")

    def _request(
        self,
        method: str,
        path: str,
        headers: Optional[Dict] = None,
        **kwargs: Any,
    ) -> requests.Response:
        """Make HTTP request with config defaults"""
        url = self._url(path)
        headers = headers or {}

        if self.config.debug:
            self._log_request_details(
                method, url, headers, kwargs.get("json")
            )

        try:
            response = self.session.request(
                method=method,
                url=url,
                headers=headers,
                timeout=self.config.timeout,
                verify=self.config.verify_ssl,
                **kwargs,
            )

            if self.config.debug:
                self._log_response_details(response)

            if response.status_code >= 400:
                logger.error(
                    f"Request failed with status {response.status_code}"
                )
                logger.error(f"Response: {response.text}")

            response.raise_for_status()
            return response

        except requests.exceptions.RequestException as e:
            logger.error(f"Request failed: {str(e)}")
            if hasattr(e, "response") and e.response is not None:
                logger.error(f"Error response: {e.response.text}")
            raise


class TestRunner:
    """Test runner with logging and reporting"""

    def __init__(self):
        self.client = APIClient(config)
        self.results = {"passed": 0, "failed": 0, "total_time": 0}
        self.api_key = None
        self.user_id = None
        self.agent_id = None

    def run_test(self, test_name: str, test_func: callable):
        """Run a single test with timing and logging"""
        logger.info(f"\nRunning test: {test_name}")
        start_time = time.time()

        try:
            test_func()
            self.results["passed"] += 1
            logger.info(f"✅ {test_name} - PASSED")
        except Exception as e:
            self.results["failed"] += 1
            logger.error(f"❌ {test_name} - FAILED: {str(e)}")
            logger.exception(e)

        end_time = time.time()
        duration = end_time - start_time
        self.results["total_time"] += duration
        logger.info(f"Test duration: {duration:.2f}s")

    def test_user_creation(self):
        """Test user creation"""
        response = self.client._request(
            "POST", "/users", json={"username": "test_user"}
        )
        data = response.json()
        assert "user_id" in data, "No user_id in response"
        assert "api_key" in data, "No api_key in response"
        self.api_key = data["api_key"]
        self.user_id = data["user_id"]
        logger.info(f"Created user with ID: {self.user_id}")

    def test_create_api_key(self):
        """Test API key creation"""
        headers = {"api-key": self.api_key}
        response = self.client._request(
            "POST",
            f"/users/{self.user_id}/api-keys",
            headers=headers,
            json={"name": "test_key"},
        )
        data = response.json()
        assert "key" in data, "No key in response"
        logger.info("Successfully created new API key")

    def test_create_agent(self):
        """Test agent creation"""
        headers = {"api-key": self.api_key}
        agent_config = {
            "agent_name": "test_agent",
            "model_name": "gpt-4",
            "system_prompt": "You are a test agent",
            "description": "Test agent description",
            "temperature": 0.7,
            "max_loops": 1,
        }
        response = self.client._request(
            "POST", "/agent", headers=headers, json=agent_config
        )
        data = response.json()
        assert "agent_id" in data, "No agent_id in response"
        self.agent_id = data["agent_id"]
        logger.info(f"Created agent with ID: {self.agent_id}")

        # Wait a bit for the agent to be ready
        time.sleep(2)

    def test_list_agents(self):
        """Test agent listing"""
        headers = {"api-key": self.api_key}
        response = self.client._request(
            "GET", "/agents", headers=headers
        )
        agents = response.json()
        assert isinstance(agents, list), "Response is not a list"
        assert len(agents) > 0, "No agents returned"
        logger.info(f"Successfully retrieved {len(agents)} agents")

    def test_agent_completion(self):
        """Test agent completion"""
        if not self.agent_id:
            logger.error("No agent_id available for completion test")
            raise ValueError("Agent ID not set")

        headers = {"api-key": self.api_key}
        completion_request = {
            "prompt": "Write 'Hello World!'",
            "agent_id": str(
                self.agent_id
            ),  # Ensure the UUID is converted to a string
            "max_tokens": 100,
            "stream": False,
            "temperature_override": 0.7,
        }

        logger.info(
            f"Sending completion request for agent {self.agent_id}"
        )
        response = self.client._request(
            "POST",
            "/agent/completions",
            headers=headers,
            json=completion_request,
        )
        data = response.json()
        assert "response" in data, "No response in completion"
        logger.info(f"Completion response: {data.get('response')}")

    def run_all_tests(self):
        """Run all tests and generate a report"""
        logger.info("\n" + "=" * 50)
        logger.info("Starting API test suite...")
        logger.info(f"Base URL: {config.base_url}")
        logger.info("=" * 50 + "\n")

        # Define the test sequence
        tests = [
            ("User Creation", self.test_user_creation),
            ("API Key Creation", self.test_create_api_key),
            ("Agent Creation", self.test_create_agent),
            ("List Agents", self.test_list_agents),
            ("Agent Completion", self.test_agent_completion),
        ]

        # Run tests
        for test_name, test_func in tests:
            self.run_test(test_name, test_func)

        # Generate report
        self.print_report()

    def print_report(self):
        """Print the test results report"""
        total_tests = self.results["passed"] + self.results["failed"]
        success_rate = (
            (self.results["passed"] / total_tests * 100)
            if total_tests > 0
            else 0
        )

        report = f"""
\n{'='*50}
API TEST RESULTS
{'='*50}
Total Tests: {total_tests}
Passed: {self.results['passed']} ✅
Failed: {self.results['failed']} ❌
Success Rate: {success_rate:.2f}%
Total Time: {self.results['total_time']:.2f}s
{'='*50}
"""
        logger.info(report)


if __name__ == "__main__":
    try:
        runner = TestRunner()
        runner.run_all_tests()
    except KeyboardInterrupt:
        logger.info("\nTest suite interrupted by user")
    except Exception as e:
        logger.error(f"Test suite failed: {str(e)}")
        logger.exception(e)
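
The runner reads API_BASE_URL once at import time, so pointing it at a non-default deployment means setting the variable before the module loads. A minimal sketch, assuming this api/ directory is on sys.path and an API instance is reachable at the chosen URL:

import os

# Must be set before agent_api_test is imported, since config is built at import time.
os.environ["API_BASE_URL"] = "http://localhost:8001/v1"  # hypothetical deployment URL

from agent_api_test import TestRunner

runner = TestRunner()
runner.run_all_tests()  # creates a user and agent, then exercises listing and completions
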
api/api_telemetry_draft.txt
ADDED
@@ -0,0 +1,936 @@
import os
import secrets
import traceback
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta
from enum import Enum
from pathlib import Path
from typing import Any, Dict, List, Optional
from uuid import UUID, uuid4

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.instrumentation.requests import RequestsInstrumentor

# Consider whether the following packages need to be added to the main swarms requirements.txt:
# opentelemetry-api
# opentelemetry-sdk
# opentelemetry-instrumentation-fastapi
# opentelemetry-instrumentation-requests
# opentelemetry-exporter-otlp-proto-grpc


import uvicorn
from dotenv import load_dotenv
from fastapi import (
    BackgroundTasks,
    Depends,
    FastAPI,
    Header,
    HTTPException,
    Query,
    Request,
    status,
)
from fastapi.middleware.cors import CORSMiddleware
from loguru import logger
from pydantic import BaseModel, Field

from swarms.structs.agent import Agent

OTEL_SERVICE_NAME = os.getenv("OTEL_SERVICE_NAME", "swarms-api")
OTEL_EXPORTER_OTLP_ENDPOINT = os.getenv("OTEL_EXPORTER_OTLP_ENDPOINT", "http://aws-otel-collector:4317")

# Load environment variables
load_dotenv()


class AgentStatus(str, Enum):
    """Enum for agent status."""

    IDLE = "idle"
    PROCESSING = "processing"
    ERROR = "error"
    MAINTENANCE = "maintenance"


# Security configurations
API_KEY_LENGTH = 32  # Length of generated API keys


class APIKey(BaseModel):
    key: str
    name: str
    created_at: datetime
    last_used: datetime
    is_active: bool = True


class APIKeyCreate(BaseModel):
    name: str  # A friendly name for the API key


class User(BaseModel):
    id: UUID
    username: str
    is_active: bool = True
    is_admin: bool = False
    api_keys: Dict[str, APIKey] = {}  # key -> APIKey object


class AgentConfig(BaseModel):
    """Configuration model for creating a new agent."""

    agent_name: str = Field(..., description="Name of the agent")
    # NOTE: an earlier draft declared model_name twice; the duplicate
    # (default="gpt-4") is dropped here because the second declaration
    # would silently override this required field.
    model_name: str = Field(
        ...,
        description="Name of the llm you want to use provided by litellm",
    )
    description: str = Field(
        default="", description="Description of the agent's purpose"
    )
    system_prompt: str = Field(
        ..., description="System prompt for the agent"
    )
    temperature: float = Field(
        default=0.1,
        ge=0.0,
        le=2.0,
        description="Temperature for the model",
    )
    max_loops: int = Field(
        default=1, ge=1, description="Maximum number of loops"
    )
    autosave: bool = Field(
        default=True, description="Enable autosave"
    )
    dashboard: bool = Field(
        default=False, description="Enable dashboard"
    )
    verbose: bool = Field(
        default=True, description="Enable verbose output"
    )
    dynamic_temperature_enabled: bool = Field(
        default=True, description="Enable dynamic temperature"
    )
    user_name: str = Field(
        default="default_user", description="Username for the agent"
    )
    retry_attempts: int = Field(
        default=1, ge=1, description="Number of retry attempts"
    )
    context_length: int = Field(
        default=200000, ge=1000, description="Context length"
    )
    output_type: str = Field(
        default="string", description="Output type (string or json)"
    )
    streaming_on: bool = Field(
        default=False, description="Enable streaming"
    )
    tags: List[str] = Field(
        default_factory=list,
        description="Tags for categorizing the agent",
    )


class AgentUpdate(BaseModel):
    """Model for updating agent configuration."""

    description: Optional[str] = None
    system_prompt: Optional[str] = None
    temperature: Optional[float] = 0.5
    max_loops: Optional[int] = 1
    tags: Optional[List[str]] = None
    status: Optional[AgentStatus] = None


class AgentSummary(BaseModel):
    """Summary model for agent listing."""

    agent_id: UUID
    agent_name: str
    description: str
    created_at: datetime
    last_used: datetime
    total_completions: int
    tags: List[str]
    status: AgentStatus


class AgentMetrics(BaseModel):
    """Model for agent performance metrics."""

    total_completions: int
    average_response_time: float
    error_rate: float
    last_24h_completions: int
    total_tokens_used: int
    uptime_percentage: float
    success_rate: float
    peak_tokens_per_minute: int


class CompletionRequest(BaseModel):
    """Model for completion requests."""

    prompt: str = Field(..., description="The prompt to process")
    agent_id: UUID = Field(..., description="ID of the agent to use")
    max_tokens: Optional[int] = Field(
        None, description="Maximum tokens to generate"
    )
    temperature_override: Optional[float] = 0.5
    stream: bool = Field(
        default=False, description="Enable streaming response"
    )


class CompletionResponse(BaseModel):
    """Model for completion responses."""

    agent_id: UUID
    response: str
    metadata: Dict[str, Any]
    timestamp: datetime
    processing_time: float
    token_usage: Dict[str, int]


class AgentStore:
    """Enhanced store for managing agents."""

    def __init__(self):
        self.agents: Dict[UUID, Agent] = {}
        self.agent_metadata: Dict[UUID, Dict[str, Any]] = {}
        self.users: Dict[UUID, User] = {}  # user_id -> User
        self.api_keys: Dict[str, UUID] = {}  # api_key -> user_id
        self.user_agents: Dict[UUID, List[UUID]] = (
            {}
        )  # user_id -> [agent_ids]
        self.executor = ThreadPoolExecutor(max_workers=4)
        self._ensure_directories()

    def _ensure_directories(self):
        """Ensure required directories exist."""
        Path("logs").mkdir(exist_ok=True)
        Path("states").mkdir(exist_ok=True)

    def create_api_key(self, user_id: UUID, key_name: str) -> APIKey:
        """Create a new API key for a user."""
        if user_id not in self.users:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail="User not found",
            )

        # Generate a secure random API key
        api_key = secrets.token_urlsafe(API_KEY_LENGTH)

        # Create the API key object
        key_object = APIKey(
            key=api_key,
            name=key_name,
            created_at=datetime.utcnow(),
            last_used=datetime.utcnow(),
        )

        # Store the API key
        self.users[user_id].api_keys[api_key] = key_object
        self.api_keys[api_key] = user_id

        return key_object

    async def verify_agent_access(
        self, agent_id: UUID, user_id: UUID
    ) -> bool:
        """Verify if a user has access to an agent."""
        if agent_id not in self.agents:
            return False
        return (
            self.agent_metadata[agent_id]["owner_id"] == user_id
            or self.users[user_id].is_admin
        )

    def validate_api_key(self, api_key: str) -> Optional[UUID]:
        """Validate an API key and return the associated user ID."""
        user_id = self.api_keys.get(api_key)
        if not user_id or api_key not in self.users[user_id].api_keys:
            return None

        key_object = self.users[user_id].api_keys[api_key]
        if not key_object.is_active:
            return None

        # Update the last-used timestamp
        key_object.last_used = datetime.utcnow()
        return user_id

    async def create_agent(
        self, config: AgentConfig, user_id: UUID
    ) -> UUID:
        """Create a new agent with the given configuration."""
        try:
            agent = Agent(
                agent_name=config.agent_name,
                system_prompt=config.system_prompt,
                model_name=config.model_name,
                max_loops=config.max_loops,
                autosave=config.autosave,
                dashboard=config.dashboard,
                verbose=config.verbose,
                dynamic_temperature_enabled=True,
                saved_state_path=f"states/{config.agent_name}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json",
                user_name=config.user_name,
                retry_attempts=config.retry_attempts,
                context_length=config.context_length,
                return_step_meta=True,
                output_type="str",
                streaming_on=config.streaming_on,
            )

            agent_id = uuid4()
            self.agents[agent_id] = agent
            self.agent_metadata[agent_id] = {
                "description": config.description,
                "created_at": datetime.utcnow(),
                "last_used": datetime.utcnow(),
                "total_completions": 0,
                "tags": config.tags,
                "total_tokens": 0,
                "error_count": 0,
                "response_times": [],
                "status": AgentStatus.IDLE,
                "start_time": datetime.utcnow(),
                "downtime": timedelta(),
                "successful_completions": 0,
            }

            # Add to the user's agents list
            if user_id not in self.user_agents:
                self.user_agents[user_id] = []
            self.user_agents[user_id].append(agent_id)

            return agent_id

        except Exception as e:
            logger.error(f"Error creating agent: {str(e)}")
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail=f"Failed to create agent: {str(e)}",
            )

    async def get_agent(self, agent_id: UUID) -> Agent:
        """Retrieve an agent by ID."""
        agent = self.agents.get(agent_id)
        if not agent:
            logger.error(f"Agent not found: {agent_id}")
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail=f"Agent {agent_id} not found",
            )
        return agent

    async def update_agent(
        self, agent_id: UUID, update: AgentUpdate
    ) -> None:
        """Update agent configuration."""
        agent = await self.get_agent(agent_id)
        metadata = self.agent_metadata[agent_id]

        if update.system_prompt:
            agent.system_prompt = update.system_prompt
        if update.max_loops is not None:
            agent.max_loops = update.max_loops
        if update.tags is not None:
            metadata["tags"] = update.tags
        if update.description is not None:
            metadata["description"] = update.description
        if update.status is not None:
            metadata["status"] = update.status
            if update.status == AgentStatus.MAINTENANCE:
                metadata["downtime"] += (
                    datetime.utcnow() - metadata["last_used"]
                )

        logger.info(f"Updated agent {agent_id}")

    async def list_agents(
        self,
        tags: Optional[List[str]] = None,
        status: Optional[AgentStatus] = None,
    ) -> List[AgentSummary]:
        """List all agents, optionally filtered by tags and status."""
        summaries = []
        for agent_id, agent in self.agents.items():
            metadata = self.agent_metadata[agent_id]

            # Apply filters
            if tags and not any(
                tag in metadata["tags"] for tag in tags
            ):
                continue
            if status and metadata["status"] != status:
                continue

            summaries.append(
                AgentSummary(
                    agent_id=agent_id,
                    agent_name=agent.agent_name,
                    description=metadata["description"],
                    created_at=metadata["created_at"],
                    last_used=metadata["last_used"],
                    total_completions=metadata["total_completions"],
                    tags=metadata["tags"],
                    status=metadata["status"],
                )
            )
        return summaries

    async def get_agent_metrics(self, agent_id: UUID) -> AgentMetrics:
        """Get performance metrics for an agent."""
        metadata = self.agent_metadata[agent_id]
        response_times = metadata["response_times"]

        # Calculate metrics
        total_time = datetime.utcnow() - metadata["start_time"]
        uptime = total_time - metadata["downtime"]
        uptime_percentage = (
            uptime.total_seconds() / total_time.total_seconds()
        ) * 100

        success_rate = (
            metadata["successful_completions"]
            / metadata["total_completions"]
            * 100
            if metadata["total_completions"] > 0
            else 0
        )

        return AgentMetrics(
            total_completions=metadata["total_completions"],
            average_response_time=(
                sum(response_times) / len(response_times)
                if response_times
                else 0
            ),
            error_rate=(
                metadata["error_count"]
                / metadata["total_completions"]
                if metadata["total_completions"] > 0
                else 0
            ),
            # NOTE (draft bug): response_times holds float durations in
            # seconds, so this datetime subtraction would fail at runtime.
            last_24h_completions=sum(
                1
                for t in response_times
                if (datetime.utcnow() - t).days < 1
            ),
            total_tokens_used=metadata["total_tokens"],
            uptime_percentage=uptime_percentage,
            success_rate=success_rate,
            # tokens_per_minute is a dict keyed by minute, so take the
            # peak over its values rather than over the dict itself.
            peak_tokens_per_minute=max(
                metadata.get("tokens_per_minute", {}).values(),
                default=0,
            ),
        )

    async def clone_agent(
        self, agent_id: UUID, new_name: str
    ) -> UUID:
        """Clone an existing agent with a new name."""
        original_agent = await self.get_agent(agent_id)
        original_metadata = self.agent_metadata[agent_id]

        config = AgentConfig(
            agent_name=new_name,
            description=f"Clone of {original_agent.agent_name}",
            system_prompt=original_agent.system_prompt,
            model_name=original_agent.model_name,
            temperature=0.5,
            max_loops=original_agent.max_loops,
            tags=original_metadata["tags"],
        )

        # NOTE (draft bug): create_agent requires a user_id, which this
        # draft does not thread through the clone path.
        return await self.create_agent(config)

    async def delete_agent(self, agent_id: UUID) -> None:
        """Delete an agent."""
        if agent_id not in self.agents:
            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail=f"Agent {agent_id} not found",
            )

        # Clean up any resources
        agent = self.agents[agent_id]
        if agent.autosave and os.path.exists(agent.saved_state_path):
            os.remove(agent.saved_state_path)

        del self.agents[agent_id]
        del self.agent_metadata[agent_id]
        logger.info(f"Deleted agent {agent_id}")

    async def process_completion(
        self,
        agent: Agent,
        prompt: str,
        agent_id: UUID,
        max_tokens: Optional[int] = None,
        temperature_override: Optional[float] = None,
    ) -> CompletionResponse:
        """Process a completion request using the specified agent."""
        # TELEMETRY CHANGE 6: Initialize a tracer for this module
        tracer = trace.get_tracer(__name__)
        # TELEMETRY CHANGE 7: Create a parent span for the entire completion process
        with tracer.start_as_current_span("process_completion") as span:
            # TELEMETRY CHANGE 8: Add context attributes
            span.set_attribute("agent.id", str(agent_id))
            span.set_attribute("agent.name", agent.agent_name)
            span.set_attribute("prompt.length", len(prompt))
            if max_tokens:
                span.set_attribute("max_tokens", max_tokens)

            start_time = datetime.utcnow()
            metadata = self.agent_metadata[agent_id]

            try:
                with tracer.start_span("update_agent_status") as status_span:
                    metadata["status"] = AgentStatus.PROCESSING
                    metadata["last_used"] = start_time
                    status_span.set_attribute("agent.status", AgentStatus.PROCESSING.value)

                with tracer.start_span("process_agent_completion") as completion_span:
                    response = agent.run(prompt)

                    completion_span.set_attribute("completion.success", True)

                with tracer.start_span("update_metrics") as metrics_span:
                    processing_time = (datetime.utcnow() - start_time).total_seconds()
                    metadata["response_times"].append(processing_time)
                    metadata["total_completions"] += 1
                    metadata["successful_completions"] += 1

                    # Rough token estimate: ~1.3 tokens per whitespace-separated word
                    prompt_tokens = len(prompt.split()) * 1.3
                    completion_tokens = len(response.split()) * 1.3
                    total_tokens = int(prompt_tokens + completion_tokens)
                    metadata["total_tokens"] += total_tokens

                    metrics_span.set_attribute("processing.time", processing_time)
                    metrics_span.set_attribute("tokens.total", total_tokens)
                    metrics_span.set_attribute("tokens.prompt", int(prompt_tokens))
                    metrics_span.set_attribute("tokens.completion", int(completion_tokens))

                with tracer.start_span("update_token_tracking") as token_span:
                    current_minute = datetime.utcnow().replace(second=0, microsecond=0)
                    if "tokens_per_minute" not in metadata:
                        metadata["tokens_per_minute"] = {}
                    metadata["tokens_per_minute"][current_minute] = (
                        metadata["tokens_per_minute"].get(current_minute, 0) + total_tokens
                    )
                    token_span.set_attribute(
                        "tokens.per_minute",
                        metadata["tokens_per_minute"][current_minute],
                    )

                completion_response = CompletionResponse(
                    agent_id=agent_id,
                    response=response,
                    metadata={
                        "agent_name": agent.agent_name,
                    },
                    timestamp=datetime.utcnow(),
                    processing_time=processing_time,
                    token_usage={
                        "prompt_tokens": int(prompt_tokens),
                        "completion_tokens": int(completion_tokens),
                        "total_tokens": total_tokens,
                    },
                )
                # TELEMETRY CHANGE 10: Mark the span as successful
                span.set_attribute("completion.status", "success")
                return completion_response

            except Exception as e:
                metadata["error_count"] += 1
                metadata["status"] = AgentStatus.ERROR
                # TELEMETRY CHANGE 11: Detailed error recording
                span.set_attribute("completion.status", "error")
                span.set_attribute("error.type", e.__class__.__name__)
                span.set_attribute("error.message", str(e))
                span.record_exception(e)

                logger.error(
                    f"Error in completion processing: {str(e)}\n{traceback.format_exc()}"
                )
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Error processing completion: {str(e)}",
                )
            finally:
                metadata["status"] = AgentStatus.IDLE
                span.set_attribute("agent.final_status", AgentStatus.IDLE.value)


class StoreManager:
    _instance = None

    @classmethod
    def get_instance(cls) -> "AgentStore":
        if cls._instance is None:
            cls._instance = AgentStore()
        return cls._instance


# Dependency function backed by the singleton manager
def get_store() -> AgentStore:
    """Dependency to get the AgentStore instance."""
    return StoreManager.get_instance()


# Security utility function using the store dependency
async def get_current_user(
    api_key: str = Header(
        ..., description="API key for authentication"
    ),
    store: AgentStore = Depends(get_store),
) -> User:
    """Validate the API key and return the current user."""
    user_id = store.validate_api_key(api_key)
    if not user_id:
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid or expired API key",
            headers={"WWW-Authenticate": "ApiKey"},
        )
    return store.users[user_id]


class SwarmsAPI:
    """Enhanced API class for Swarms agent integration."""

    def __init__(self):
        self.app = FastAPI(
            title="Swarms Agent API",
            description="Production-grade API for Swarms agent interaction",
            version="1.0.0",
            docs_url="/v1/docs",
            redoc_url="/v1/redoc",
        )
        # Initialize the store using the singleton manager
        self.store = StoreManager.get_instance()

        # Configure CORS
        self.app.add_middleware(
            CORSMiddleware,
            allow_origins=[
                "*"
            ],  # Configure appropriately for production
            allow_credentials=True,
            allow_methods=["*"],
            allow_headers=["*"],
        )

        self._setup_routes()

    def _setup_routes(self):
        """Set up API routes."""

        @self.app.post("/v1/users", response_model=Dict[str, Any])
        async def create_user(request: Request):
            """Create a new user and an initial API key."""
            try:
                body = await request.json()
                username = body.get("username")
                if not username or len(username) < 3:
                    raise HTTPException(
                        status_code=400, detail="Invalid username"
                    )

                user_id = uuid4()
                user = User(id=user_id, username=username)
                self.store.users[user_id] = user
                initial_key = self.store.create_api_key(
                    user_id, "Initial Key"
                )
                return {
                    "user_id": user_id,
                    "api_key": initial_key.key,
                }
            except Exception as e:
                logger.error(f"Error creating user: {str(e)}")
                raise HTTPException(status_code=400, detail=str(e))

        @self.app.post(
            "/v1/users/{user_id}/api-keys", response_model=APIKey
        )
        async def create_api_key(
            user_id: UUID,
            key_create: APIKeyCreate,
            current_user: User = Depends(get_current_user),
        ):
            """Create a new API key for a user."""
            if (
                current_user.id != user_id
                and not current_user.is_admin
            ):
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail="Not authorized to create API keys for this user",
                )

            return self.store.create_api_key(user_id, key_create.name)

        @self.app.get(
            "/v1/users/{user_id}/api-keys",
            response_model=List[APIKey],
        )
        async def list_api_keys(
            user_id: UUID,
            current_user: User = Depends(get_current_user),
        ):
            """List all API keys for a user."""
            if (
                current_user.id != user_id
                and not current_user.is_admin
            ):
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail="Not authorized to view API keys for this user",
                )

            return list(self.store.users[user_id].api_keys.values())

        @self.app.delete("/v1/users/{user_id}/api-keys/{key}")
        async def revoke_api_key(
            user_id: UUID,
            key: str,
            current_user: User = Depends(get_current_user),
        ):
            """Revoke an API key."""
            if (
                current_user.id != user_id
                and not current_user.is_admin
            ):
                raise HTTPException(
                    status_code=status.HTTP_403_FORBIDDEN,
                    detail="Not authorized to revoke API keys for this user",
                )

            if key in self.store.users[user_id].api_keys:
                self.store.users[user_id].api_keys[
                    key
                ].is_active = False
                del self.store.api_keys[key]
                return {"status": "API key revoked"}

            raise HTTPException(
                status_code=status.HTTP_404_NOT_FOUND,
                detail="API key not found",
            )

        @self.app.get(
            "/v1/users/me/agents", response_model=List[AgentSummary]
        )
        async def list_user_agents(
            current_user: User = Depends(get_current_user),
            tags: Optional[List[str]] = Query(None),
            status: Optional[AgentStatus] = None,
        ):
            """List all agents owned by the current user."""
            user_agents = self.store.user_agents.get(
                current_user.id, []
            )
            return [
                agent
                for agent in await self.store.list_agents(
                    tags, status
                )
                if agent.agent_id in user_agents
            ]

        # Existing routes, now using API key authentication
        @self.app.post("/v1/agent", response_model=Dict[str, UUID])
        async def create_agent(
            config: AgentConfig,
            current_user: User = Depends(get_current_user),
        ):
            """Create a new agent with the specified configuration."""
            agent_id = await self.store.create_agent(
                config, current_user.id
            )
            return {"agent_id": agent_id}

        @self.app.get("/v1/agents", response_model=List[AgentSummary])
        async def list_agents(
            tags: Optional[List[str]] = Query(None),
            status: Optional[AgentStatus] = None,
        ):
            """List all agents, optionally filtered by tags and status."""
            return await self.store.list_agents(tags, status)

        @self.app.patch(
            "/v1/agent/{agent_id}", response_model=Dict[str, str]
        )
        async def update_agent(agent_id: UUID, update: AgentUpdate):
            """Update an existing agent's configuration."""
            await self.store.update_agent(agent_id, update)
            return {"status": "updated"}

        @self.app.get(
            "/v1/agent/{agent_id}/metrics",
            response_model=AgentMetrics,
        )
        async def get_agent_metrics(agent_id: UUID):
            """Get performance metrics for a specific agent."""
            return await self.store.get_agent_metrics(agent_id)

        @self.app.post(
            "/v1/agent/{agent_id}/clone",
            response_model=Dict[str, UUID],
        )
        async def clone_agent(agent_id: UUID, new_name: str):
            """Clone an existing agent with a new name."""
            new_id = await self.store.clone_agent(agent_id, new_name)
            return {"agent_id": new_id}

        @self.app.delete("/v1/agent/{agent_id}")
        async def delete_agent(agent_id: UUID):
            """Delete an agent."""
            await self.store.delete_agent(agent_id)
            return {"status": "deleted"}

        @self.app.post(
            "/v1/agent/completions", response_model=CompletionResponse
        )
        async def create_completion(
            request: CompletionRequest,
            background_tasks: BackgroundTasks,
        ):
            """Process a completion request with the specified agent."""
            try:
                agent = await self.store.get_agent(request.agent_id)

                # Process the completion
                response = await self.store.process_completion(
                    agent,
                    request.prompt,
                    request.agent_id,
                    request.max_tokens,
                    0.5,
                )

                # Schedule background cleanup
                background_tasks.add_task(
                    self._cleanup_old_metrics, request.agent_id
                )

                return response

            except Exception as e:
                logger.error(f"Error processing completion: {str(e)}")
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Error processing completion: {str(e)}",
                )

        @self.app.get("/v1/agent/{agent_id}/status")
        async def get_agent_status(agent_id: UUID):
            """Get the current status of an agent."""
            metadata = self.store.agent_metadata.get(agent_id)
            if not metadata:
                raise HTTPException(
                    status_code=status.HTTP_404_NOT_FOUND,
                    detail=f"Agent {agent_id} not found",
                )
            return {
                "agent_id": agent_id,
                "status": metadata["status"],
                "last_used": metadata["last_used"],
                "total_completions": metadata["total_completions"],
                "error_count": metadata["error_count"],
            }

    async def _cleanup_old_metrics(self, agent_id: UUID):
        """Clean up old metrics data to prevent memory bloat."""
        metadata = self.store.agent_metadata.get(agent_id)
        if metadata:
            # Keep only the last 24 hours of response times
            cutoff = datetime.utcnow() - timedelta(days=1)
            metadata["response_times"] = [
                t
                for t in metadata["response_times"]
                if isinstance(t, (int, float))
                and t > cutoff.timestamp()
            ]

            # Clean up old tokens-per-minute data
            if "tokens_per_minute" in metadata:
                metadata["tokens_per_minute"] = {
                    k: v
                    for k, v in metadata["tokens_per_minute"].items()
                    if k > cutoff
                }


def create_app() -> FastAPI:
    """Create and configure the FastAPI application."""
    logger.info("Creating FastAPI application")

    # TELEMETRY CHANGE 1: Configure the OpenTelemetry resource with the
    # service name (uses the OTEL_* environment overrides defined above)
    resource = Resource.create({"service.name": OTEL_SERVICE_NAME})
    trace.set_tracer_provider(TracerProvider(resource=resource))

    # TELEMETRY CHANGE 2: Set up the OTLP exporter for AWS
    otlp_exporter = OTLPSpanExporter(
        endpoint=OTEL_EXPORTER_OTLP_ENDPOINT,  # AWS OpenTelemetry Collector endpoint
        insecure=True,
    )

    # TELEMETRY CHANGE 3: Configure batch processing of spans
    span_processor = BatchSpanProcessor(otlp_exporter)
    trace.get_tracer_provider().add_span_processor(span_processor)

    api = SwarmsAPI()
    app = api.app

    # Attach request/response attributes to the current span. Registered
    # here, after the app exists (the earlier draft placed this decorator
    # inside the class body, where `app` is undefined).
    @app.middleware("http")
    async def add_trace_context(request: Request, call_next):
        span = trace.get_current_span()
        span.set_attribute("http.url", str(request.url))
        span.set_attribute("http.method", request.method)
        response = await call_next(request)
        span.set_attribute("http.status_code", response.status_code)
        return response

    # TELEMETRY CHANGE 4: Instrument the FastAPI framework
    FastAPIInstrumentor.instrument_app(app)

    # TELEMETRY CHANGE 5: Instrument the HTTP client library
    RequestsInstrumentor().instrument()

    logger.info("FastAPI application created successfully")
    return app


app = create_app()

if __name__ == "__main__":
    try:
        logger.info("Starting API server...")
        print("Starting API server on http://0.0.0.0:8000")

        uvicorn.run(
            app,  # Pass the app instance directly
            host="0.0.0.0",
            port=8000,
            log_level="info",
        )
    except Exception as e:
        logger.error(f"Failed to start API: {str(e)}")
        print(f"Error starting server: {str(e)}")
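
A minimal, self-contained sketch of the span nesting this draft adds around process_completion, swapping the AWS OTLP exporter for a console exporter so the output can be inspected locally (ConsoleSpanExporter and SimpleSpanProcessor are standard opentelemetry-sdk classes):

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Provider setup mirrors create_app(), minus the gRPC collector.
trace.set_tracer_provider(
    TracerProvider(resource=Resource.create({"service.name": "swarms-api-demo"}))
)
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process_completion") as span:
    span.set_attribute("agent.id", "demo-agent")
    with tracer.start_as_current_span("process_agent_completion") as child:
        child.set_attribute("completion.success", True)
# Each span is printed as JSON when it ends; swap in OTLPSpanExporter
# plus BatchSpanProcessor to ship the same spans to a collector.
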
api/api_test.py
ADDED
@@ -0,0 +1,254 @@
import os
from typing import Dict, Optional, Any
from dataclasses import dataclass
import pytest
import requests
from uuid import UUID
from pydantic import BaseModel
from _pytest.terminal import TerminalReporter


# Configuration
@dataclass
class TestConfig:
    """Test configuration settings"""

    base_url: str
    timeout: int = 30
    verify_ssl: bool = True


# Load config from environment or use defaults
config = TestConfig(
    base_url=os.getenv("API_BASE_URL", "http://localhost:8000/v1")
)


# API Response Types
class UserResponse(BaseModel):
    user_id: str
    api_key: str


class AgentResponse(BaseModel):
    agent_id: UUID


class MetricsResponse(BaseModel):
    total_completions: int
    average_response_time: float
    error_rate: float
    last_24h_completions: int
    total_tokens_used: int
    uptime_percentage: float
    success_rate: float
    peak_tokens_per_minute: int


class APIClient:
    """API client with typed methods"""

    def __init__(self, config: TestConfig):
        self.config = config
        self.session = requests.Session()

    def _url(self, path: str) -> str:
        """Construct full URL"""
        return f"{self.config.base_url}/{path.lstrip('/')}"

    def _request(
        self,
        method: str,
        path: str,
        headers: Optional[Dict] = None,
        **kwargs: Any,
    ) -> requests.Response:
        """Make HTTP request with config defaults"""
        url = self._url(path)
        return self.session.request(
            method=method,
            url=url,
            headers=headers,
            timeout=self.config.timeout,
            verify=self.config.verify_ssl,
            **kwargs,
        )

    def create_user(self, username: str) -> UserResponse:
        """Create a new user"""
        response = self._request(
            "POST", "/users", json={"username": username}
        )
        response.raise_for_status()
        return UserResponse(**response.json())

    def create_agent(
        self, agent_config: Dict[str, Any], api_key: str
    ) -> AgentResponse:
        """Create a new agent"""
        headers = {"api-key": api_key}
        response = self._request(
            "POST", "/agent", headers=headers, json=agent_config
        )
        response.raise_for_status()
        return AgentResponse(**response.json())

    def get_metrics(
        self, agent_id: UUID, api_key: str
    ) -> MetricsResponse:
        """Get agent metrics"""
        headers = {"api-key": api_key}
        response = self._request(
            "GET", f"/agent/{agent_id}/metrics", headers=headers
        )
        response.raise_for_status()
        return MetricsResponse(**response.json())


# Test Fixtures
@pytest.fixture
def api_client() -> APIClient:
    """Fixture for API client"""
    return APIClient(config)


@pytest.fixture
def test_user(api_client: APIClient) -> UserResponse:
    """Fixture for test user"""
    return api_client.create_user("test_user")


@pytest.fixture
def test_agent(
    api_client: APIClient, test_user: UserResponse
) -> AgentResponse:
    """Fixture for test agent"""
    agent_config = {
        "agent_name": "test_agent",
        "model_name": "gpt-4",
        "system_prompt": "You are a test agent",
        "description": "Test agent description",
    }
    return api_client.create_agent(agent_config, test_user.api_key)


# Tests
def test_user_creation(api_client: APIClient):
    """Test user creation flow"""
    response = api_client.create_user("new_test_user")
    assert response.user_id
    assert response.api_key


def test_agent_creation(
    api_client: APIClient, test_user: UserResponse
):
    """Test agent creation flow"""
    agent_config = {
        "agent_name": "test_agent",
        "model_name": "gpt-4",
        "system_prompt": "You are a test agent",
        "description": "Test agent description",
    }
    response = api_client.create_agent(
        agent_config, test_user.api_key
    )
    assert response.agent_id


def test_agent_metrics(
    api_client: APIClient,
    test_user: UserResponse,
    test_agent: AgentResponse,
):
    """Test metrics retrieval"""
    metrics = api_client.get_metrics(
        test_agent.agent_id, test_user.api_key
    )
    assert metrics.total_completions >= 0
    assert metrics.error_rate >= 0
    assert metrics.uptime_percentage >= 0


def test_invalid_auth(api_client: APIClient):
    """Test invalid authentication"""
    with pytest.raises(requests.exceptions.HTTPError) as exc_info:
        api_client.create_agent({}, "invalid_key")
    assert exc_info.value.response.status_code == 401


# Custom pytest plugin to capture test results
class ResultCapture:
    def __init__(self):
        self.total = 0
        self.passed = 0
        self.failed = 0
        self.errors = 0


@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(
    terminalreporter: TerminalReporter, exitstatus: int
):
    yield
    capture = getattr(
        terminalreporter.config, "_result_capture", None
    )
    if capture:
        capture.total = (
            len(terminalreporter.stats.get("passed", []))
            + len(terminalreporter.stats.get("failed", []))
            + len(terminalreporter.stats.get("error", []))
        )
        capture.passed = len(terminalreporter.stats.get("passed", []))
        capture.failed = len(terminalreporter.stats.get("failed", []))
        capture.errors = len(terminalreporter.stats.get("error", []))


@dataclass
class TestReport:
    total_tests: int
    passed: int
    failed: int
    errors: int

    @property
    def success_rate(self) -> float:
        return (
            (self.passed / self.total_tests) * 100
            if self.total_tests > 0
            else 0
        )


def run_tests() -> TestReport:
    """Run tests and generate a typed report"""
    # Create result capture
    capture = ResultCapture()

    # Create pytest configuration
    args = [__file__, "-v"]

    # Run pytest with our plugin
|
233 |
+
pytest.main(args, plugins=[capture])
|
234 |
+
|
235 |
+
# Generate report
|
236 |
+
return TestReport(
|
237 |
+
total_tests=capture.total,
|
238 |
+
passed=capture.passed,
|
239 |
+
failed=capture.failed,
|
240 |
+
errors=capture.errors,
|
241 |
+
)
|
242 |
+
|
243 |
+
|
244 |
+
if __name__ == "__main__":
|
245 |
+
# Example usage with environment variable
|
246 |
+
# export API_BASE_URL=http://api.example.com/v1
|
247 |
+
|
248 |
+
report = run_tests()
|
249 |
+
print("\nTest Results:")
|
250 |
+
print(f"Total Tests: {report.total_tests}")
|
251 |
+
print(f"Passed: {report.passed}")
|
252 |
+
print(f"Failed: {report.failed}")
|
253 |
+
print(f"Errors: {report.errors}")
|
254 |
+
print(f"Success Rate: {report.success_rate:.2f}%")
|
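
The typed client above can also drive a quick smoke check outside of pytest. A minimal sketch, assuming the server from api/main.py is already serving on localhost:8000 and that this file is importable from the working directory (the username and agent fields below are illustrative):

from api_test import APIClient, TestConfig  # hypothetical import path

client = APIClient(TestConfig(base_url="http://localhost:8000/v1"))
user = client.create_user("smoke_test_user")  # returns user_id and api_key
agent = client.create_agent(
    {
        "agent_name": "smoke_agent",
        "model_name": "gpt-4",
        "system_prompt": "You are a smoke-test agent",
    },
    user.api_key,
)
print(client.get_metrics(agent.agent_id, user.api_key))
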
api/api_tests.py
ADDED
@@ -0,0 +1,472 @@
import asyncio
import json
from datetime import datetime
from typing import Any, Dict, List, Optional
from uuid import UUID

import httpx
from loguru import logger

# Configure logger
logger.add(
    "tests/api_test_{time}.log",
    rotation="1 day",
    retention="7 days",
    level="DEBUG",
    format="{time:YYYY-MM-DD HH:mm:ss} | {level} | {message}",
)


class TestConfig:
    """Test configuration and utilities"""

    BASE_URL: str = "http://localhost:8000/v1"
    TEST_USERNAME: str = "test_user"
    api_key: Optional[str] = None
    user_id: Optional[UUID] = None
    test_agent_id: Optional[UUID] = None


class TestResult:
    """Model for test results"""

    def __init__(
        self,
        test_name: str,
        status: str,
        duration: float,
        error: Optional[str] = None,
        details: Optional[Dict[str, Any]] = None,
    ):
        self.test_name = test_name
        self.status = status
        self.duration = duration
        self.error = error
        self.details = details or {}

    def dict(self):
        return {
            "test_name": self.test_name,
            "status": self.status,
            "duration": self.duration,
            "error": self.error,
            "details": self.details,
        }


async def log_response(
    response: httpx.Response, test_name: str
) -> None:
    """Log API response details"""
    logger.debug(f"\n{test_name} Response:")
    logger.debug(f"Status Code: {response.status_code}")
    logger.debug(f"Headers: {dict(response.headers)}")
    try:
        logger.debug(f"Body: {response.json()}")
    except json.JSONDecodeError:
        logger.debug(f"Body: {response.text}")


async def create_test_user() -> TestResult:
    """Create a test user and get an API key"""
    start_time = datetime.now()
    try:
        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{TestConfig.BASE_URL}/users",
                json={"username": TestConfig.TEST_USERNAME},
            )
            await log_response(response, "Create User")

            if response.status_code == 200:
                data = response.json()
                TestConfig.api_key = data["api_key"]
                TestConfig.user_id = UUID(data["user_id"])
                return TestResult(
                    test_name="create_test_user",
                    status="passed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    details={"user_id": str(TestConfig.user_id)},
                )
            else:
                return TestResult(
                    test_name="create_test_user",
                    status="failed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    error=f"Failed to create user: {response.text}",
                )
    except Exception as e:
        logger.error(f"Error in create_test_user: {str(e)}")
        return TestResult(
            test_name="create_test_user",
            status="error",
            duration=(datetime.now() - start_time).total_seconds(),
            error=str(e),
        )


async def create_test_agent() -> TestResult:
    """Create a test agent"""
    start_time = datetime.now()
    try:
        # Create the agent config according to the AgentConfig model
        agent_config = {
            "agent_name": "test_agent",
            "model_name": "gpt-4",
            "description": "Test agent for API testing",
            "system_prompt": "You are a test agent.",
            "temperature": 0.1,
            "max_loops": 1,
            "dynamic_temperature_enabled": True,
            "user_name": TestConfig.TEST_USERNAME,
            "retry_attempts": 1,
            "context_length": 4000,
            "output_type": "string",
            "streaming_on": False,
            "tags": ["test", "api"],
            "stopping_token": "<DONE>",
            "auto_generate_prompt": False,
        }

        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{TestConfig.BASE_URL}/agent",
                json=agent_config,
                headers={"api-key": TestConfig.api_key},
            )
            await log_response(response, "Create Agent")

            if response.status_code == 200:
                data = response.json()
                TestConfig.test_agent_id = UUID(data["agent_id"])
                return TestResult(
                    test_name="create_test_agent",
                    status="passed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    details={"agent_id": str(TestConfig.test_agent_id)},
                )
            else:
                return TestResult(
                    test_name="create_test_agent",
                    status="failed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    error=f"Failed to create agent: {response.text}",
                )
    except Exception as e:
        logger.error(f"Error in create_test_agent: {str(e)}")
        return TestResult(
            test_name="create_test_agent",
            status="error",
            duration=(datetime.now() - start_time).total_seconds(),
            error=str(e),
        )


async def test_agent_completion() -> TestResult:
    """Test the agent completion endpoint"""
    start_time = datetime.now()
    try:
        completion_request = {
            "prompt": "Hello, this is a test prompt.",
            "agent_id": str(TestConfig.test_agent_id),
            "max_tokens": 100,
            "temperature_override": 0.5,
            "stream": False,
        }

        async with httpx.AsyncClient() as client:
            response = await client.post(
                f"{TestConfig.BASE_URL}/agent/completions",
                json=completion_request,
                headers={"api-key": TestConfig.api_key},
            )
            await log_response(response, "Agent Completion")

            if response.status_code == 200:
                return TestResult(
                    test_name="test_agent_completion",
                    status="passed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    details={"response": response.json()},
                )
            else:
                return TestResult(
                    test_name="test_agent_completion",
                    status="failed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    error=f"Failed completion test: {response.text}",
                )
    except Exception as e:
        logger.error(f"Error in test_agent_completion: {str(e)}")
        return TestResult(
            test_name="test_agent_completion",
            status="error",
            duration=(datetime.now() - start_time).total_seconds(),
            error=str(e),
        )


async def test_agent_metrics() -> TestResult:
    """Test the agent metrics endpoint"""
    start_time = datetime.now()
    try:
        if not TestConfig.test_agent_id:
            return TestResult(
                test_name="test_agent_metrics",
                status="failed",
                duration=(datetime.now() - start_time).total_seconds(),
                error="No test agent ID available",
            )

        async with httpx.AsyncClient() as client:
            response = await client.get(
                f"{TestConfig.BASE_URL}/agent/{str(TestConfig.test_agent_id)}/metrics",
                headers={"api-key": TestConfig.api_key},
            )
            await log_response(response, "Agent Metrics")

            if response.status_code == 200:
                return TestResult(
                    test_name="test_agent_metrics",
                    status="passed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    details={"metrics": response.json()},
                )
            else:
                return TestResult(
                    test_name="test_agent_metrics",
                    status="failed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    error=f"Failed metrics test: {response.text}",
                )
    except Exception as e:
        logger.error(f"Error in test_agent_metrics: {str(e)}")
        return TestResult(
            test_name="test_agent_metrics",
            status="error",
            duration=(datetime.now() - start_time).total_seconds(),
            error=str(e),
        )


async def test_update_agent() -> TestResult:
    """Test the agent update endpoint"""
    start_time = datetime.now()
    try:
        if not TestConfig.test_agent_id:
            return TestResult(
                test_name="test_update_agent",
                status="failed",
                duration=(datetime.now() - start_time).total_seconds(),
                error="No test agent ID available",
            )

        update_data = {
            "description": "Updated test agent description",
            "tags": ["test", "updated"],
            "max_loops": 2,
        }

        async with httpx.AsyncClient() as client:
            response = await client.patch(
                f"{TestConfig.BASE_URL}/agent/{str(TestConfig.test_agent_id)}",
                json=update_data,
                headers={"api-key": TestConfig.api_key},
            )
            await log_response(response, "Update Agent")

            if response.status_code == 200:
                return TestResult(
                    test_name="test_update_agent",
                    status="passed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    details={"update_response": response.json()},
                )
            else:
                return TestResult(
                    test_name="test_update_agent",
                    status="failed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    error=f"Failed update test: {response.text}",
                )
    except Exception as e:
        logger.error(f"Error in test_update_agent: {str(e)}")
        return TestResult(
            test_name="test_update_agent",
            status="error",
            duration=(datetime.now() - start_time).total_seconds(),
            error=str(e),
        )


async def test_error_handling() -> TestResult:
    """Test API error handling"""
    start_time = datetime.now()
    try:
        async with httpx.AsyncClient() as client:
            # Request metrics for a nonexistent agent with an invalid API key
            invalid_agent_id = "00000000-0000-0000-0000-000000000000"
            response = await client.get(
                f"{TestConfig.BASE_URL}/agent/{invalid_agent_id}/metrics",
                headers={"api-key": "invalid_key"},
            )
            await log_response(response, "Invalid API Key Test")

            if response.status_code in [401, 403]:
                return TestResult(
                    test_name="test_error_handling",
                    status="passed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    details={"error_response": response.json()},
                )
            else:
                return TestResult(
                    test_name="test_error_handling",
                    status="failed",
                    duration=(datetime.now() - start_time).total_seconds(),
                    error="Error handling test failed",
                )
    except Exception as e:
        logger.error(f"Error in test_error_handling: {str(e)}")
        return TestResult(
            test_name="test_error_handling",
            status="error",
            duration=(datetime.now() - start_time).total_seconds(),
            error=str(e),
        )


async def cleanup_test_resources() -> TestResult:
    """Clean up test resources"""
    start_time = datetime.now()
    try:
        if TestConfig.test_agent_id:
            async with httpx.AsyncClient() as client:
                response = await client.delete(
                    f"{TestConfig.BASE_URL}/agent/{str(TestConfig.test_agent_id)}",
                    headers={"api-key": TestConfig.api_key},
                )
                await log_response(response, "Delete Agent")

        return TestResult(
            test_name="cleanup_test_resources",
            status="passed",
            duration=(datetime.now() - start_time).total_seconds(),
            details={"cleanup": "completed"},
        )
    except Exception as e:
        logger.error(f"Error in cleanup_test_resources: {str(e)}")
        return TestResult(
            test_name="cleanup_test_resources",
            status="error",
            duration=(datetime.now() - start_time).total_seconds(),
            error=str(e),
        )


async def run_all_tests() -> List[TestResult]:
    """Run all tests in sequence"""
    logger.info("Starting API test suite")
    results = []

    # Initialize
    results.append(await create_test_user())
    if results[-1].status != "passed":
        logger.error(
            "Failed to create test user, aborting remaining tests"
        )
        return results

    # Add a delay to ensure the user is properly created
    await asyncio.sleep(1)

    # Core tests
    test_functions = [
        create_test_agent,
        test_agent_completion,
        test_agent_metrics,
        test_update_agent,
        test_error_handling,
    ]

    for test_func in test_functions:
        result = await test_func()
        results.append(result)
        logger.info(f"Test {result.test_name}: {result.status}")
        if result.error:
            logger.error(f"Error in {result.test_name}: {result.error}")

        # Add a small delay between tests
        await asyncio.sleep(0.5)

    # Cleanup
    results.append(await cleanup_test_resources())

    # Log summary
    passed = sum(1 for r in results if r.status == "passed")
    failed = sum(1 for r in results if r.status == "failed")
    errors = sum(1 for r in results if r.status == "error")

    logger.info("\nTest Summary:")
    logger.info(f"Total Tests: {len(results)}")
    logger.info(f"Passed: {passed}")
    logger.info(f"Failed: {failed}")
    logger.info(f"Errors: {errors}")

    return results


def main():
    """Main entry point for running the tests"""
    logger.info("Starting API testing suite")
    try:
        results = asyncio.run(run_all_tests())

        # Write results to a JSON file
        with open("test_results.json", "w") as f:
            json.dump(
                [result.dict() for result in results],
                f,
                indent=2,
                default=str,
            )

        logger.info("Test results written to test_results.json")

    except Exception:
        # Log the full traceback instead of swallowing the error
        logger.exception("Fatal error in test suite")


if __name__ == "__main__":
    main()
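
Because each check above is a standalone coroutine that records its state on TestConfig, individual steps can be rerun in isolation while debugging. A minimal sketch, assuming the server is up and this script is run from the api/ directory (the import path is an assumption about your layout):

import asyncio

from api_tests import create_test_user, create_test_agent  # hypothetical import path

async def debug_setup():
    # Must run first: populates TestConfig.api_key and TestConfig.user_id
    print((await create_test_user()).dict())
    # Uses the API key captured above
    print((await create_test_agent()).dict())

asyncio.run(debug_setup())
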
api/main.py
ADDED
@@ -0,0 +1,981 @@
1 |
+
import asyncio
|
2 |
+
import os
|
3 |
+
import secrets
|
4 |
+
import signal
|
5 |
+
import sys
|
6 |
+
import traceback
|
7 |
+
from concurrent.futures import ThreadPoolExecutor
|
8 |
+
from datetime import datetime, timedelta
|
9 |
+
from enum import Enum
|
10 |
+
from pathlib import Path
|
11 |
+
from typing import Any, AsyncGenerator, Dict, List, Optional
|
12 |
+
from uuid import UUID, uuid4
|
13 |
+
|
14 |
+
from fastapi.concurrency import asynccontextmanager
|
15 |
+
import uvicorn
|
16 |
+
from dotenv import load_dotenv
|
17 |
+
from fastapi import (
|
18 |
+
BackgroundTasks,
|
19 |
+
Depends,
|
20 |
+
FastAPI,
|
21 |
+
Header,
|
22 |
+
HTTPException,
|
23 |
+
Query,
|
24 |
+
Request,
|
25 |
+
status,
|
26 |
+
)
|
27 |
+
from fastapi.middleware.cors import CORSMiddleware
|
28 |
+
from loguru import logger
|
29 |
+
from pydantic import BaseModel, Field
|
30 |
+
|
31 |
+
from swarms.structs.agent import Agent
|
32 |
+
|
33 |
+
# Original API, drafting OpenTelemetry Integrations in this directory
|
34 |
+
|
35 |
+
# Load environment variables
|
36 |
+
load_dotenv()
|
37 |
+
|
38 |
+
|
39 |
+
class UvicornServer(uvicorn.Server):
|
40 |
+
"""Customized uvicorn server with graceful shutdown support"""
|
41 |
+
|
42 |
+
async def setup(self, sockets=None):
|
43 |
+
"""Setup the server"""
|
44 |
+
await super().setup(sockets)
|
45 |
+
|
46 |
+
async def shutdown(self, sockets=None):
|
47 |
+
"""Gracefully shutdown the server"""
|
48 |
+
logger.info("Shutting down server...")
|
49 |
+
await super().shutdown(sockets)
|
50 |
+
|
51 |
+
|
52 |
+
class AgentStatus(str, Enum):
|
53 |
+
"""Enum for agent status."""
|
54 |
+
|
55 |
+
IDLE = "idle"
|
56 |
+
PROCESSING = "processing"
|
57 |
+
ERROR = "error"
|
58 |
+
MAINTENANCE = "maintenance"
|
59 |
+
|
60 |
+
|
61 |
+
# Security configurations
|
62 |
+
API_KEY_LENGTH = 32 # Length of generated API keys
|
63 |
+
|
64 |
+
|
65 |
+
class APIKey(BaseModel):
|
66 |
+
key: str
|
67 |
+
name: str
|
68 |
+
created_at: datetime
|
69 |
+
last_used: datetime
|
70 |
+
is_active: bool = True
|
71 |
+
|
72 |
+
|
73 |
+
class APIKeyCreate(BaseModel):
|
74 |
+
name: str # A friendly name for the API key
|
75 |
+
|
76 |
+
|
77 |
+
class User(BaseModel):
|
78 |
+
id: UUID
|
79 |
+
username: str
|
80 |
+
is_active: bool = True
|
81 |
+
is_admin: bool = False
|
82 |
+
api_keys: Dict[str, APIKey] = Field(default_factory=dict)
|
83 |
+
|
84 |
+
def ensure_active_api_key(self) -> Optional[APIKey]:
|
85 |
+
"""Ensure user has at least one active API key."""
|
86 |
+
active_keys = [
|
87 |
+
key for key in self.api_keys.values() if key.is_active
|
88 |
+
]
|
89 |
+
if not active_keys:
|
90 |
+
return None
|
91 |
+
return active_keys[0]
|
92 |
+
|
93 |
+
|
94 |
+
class AgentConfig(BaseModel):
|
95 |
+
"""Configuration model for creating a new agent."""
|
96 |
+
|
97 |
+
agent_name: str = Field(..., description="Name of the agent")
|
98 |
+
model_name: str = Field(
|
99 |
+
...,
|
100 |
+
description="Name of the llm you want to use provided by litellm",
|
101 |
+
)
|
102 |
+
description: str = Field(
|
103 |
+
default="", description="Description of the agent's purpose"
|
104 |
+
)
|
105 |
+
system_prompt: str = Field(
|
106 |
+
..., description="System prompt for the agent"
|
107 |
+
)
|
108 |
+
model_name: str = Field(
|
109 |
+
default="gpt-4", description="Model name to use"
|
110 |
+
)
|
111 |
+
temperature: float = Field(
|
112 |
+
default=0.1,
|
113 |
+
ge=0.0,
|
114 |
+
le=2.0,
|
115 |
+
description="Temperature for the model",
|
116 |
+
)
|
117 |
+
max_loops: int = Field(
|
118 |
+
default=1, ge=1, description="Maximum number of loops"
|
119 |
+
)
|
120 |
+
dynamic_temperature_enabled: bool = Field(
|
121 |
+
default=True, description="Enable dynamic temperature"
|
122 |
+
)
|
123 |
+
user_name: str = Field(
|
124 |
+
default="default_user", description="Username for the agent"
|
125 |
+
)
|
126 |
+
retry_attempts: int = Field(
|
127 |
+
default=1, ge=1, description="Number of retry attempts"
|
128 |
+
)
|
129 |
+
context_length: int = Field(
|
130 |
+
default=200000, ge=1000, description="Context length"
|
131 |
+
)
|
132 |
+
output_type: str = Field(
|
133 |
+
default="string", description="Output type (string or json)"
|
134 |
+
)
|
135 |
+
streaming_on: bool = Field(
|
136 |
+
default=False, description="Enable streaming"
|
137 |
+
)
|
138 |
+
tags: List[str] = Field(
|
139 |
+
default_factory=list,
|
140 |
+
description="Tags for categorizing the agent",
|
141 |
+
)
|
142 |
+
stopping_token: str = Field(
|
143 |
+
default="<DONE>", description="Stopping token for the agent"
|
144 |
+
)
|
145 |
+
auto_generate_prompt: bool = Field(
|
146 |
+
default=False,
|
147 |
+
description="Auto-generate prompt based on agent details such as name, description, etc.",
|
148 |
+
)
|
149 |
+
|
150 |
+
|
151 |
+
class AgentUpdate(BaseModel):
|
152 |
+
"""Model for updating agent configuration."""
|
153 |
+
|
154 |
+
description: Optional[str] = None
|
155 |
+
system_prompt: Optional[str] = None
|
156 |
+
temperature: Optional[float] = 0.5
|
157 |
+
max_loops: Optional[int] = 1
|
158 |
+
tags: Optional[List[str]] = None
|
159 |
+
status: Optional[AgentStatus] = None
|
160 |
+
|
161 |
+
|
162 |
+
class AgentSummary(BaseModel):
|
163 |
+
"""Summary model for agent listing."""
|
164 |
+
|
165 |
+
agent_id: UUID
|
166 |
+
agent_name: str
|
167 |
+
description: str
|
168 |
+
system_prompt: str
|
169 |
+
created_at: datetime
|
170 |
+
last_used: datetime
|
171 |
+
total_completions: int
|
172 |
+
tags: List[str]
|
173 |
+
status: AgentStatus
|
174 |
+
|
175 |
+
|
176 |
+
class AgentMetrics(BaseModel):
|
177 |
+
"""Model for agent performance metrics."""
|
178 |
+
|
179 |
+
total_completions: int
|
180 |
+
average_response_time: float
|
181 |
+
error_rate: float
|
182 |
+
last_24h_completions: int
|
183 |
+
total_tokens_used: int
|
184 |
+
uptime_percentage: float
|
185 |
+
success_rate: float
|
186 |
+
peak_tokens_per_minute: int
|
187 |
+
|
188 |
+
|
189 |
+
class CompletionRequest(BaseModel):
|
190 |
+
"""Model for completion requests."""
|
191 |
+
|
192 |
+
prompt: str = Field(..., description="The prompt to process")
|
193 |
+
agent_id: UUID = Field(..., description="ID of the agent to use")
|
194 |
+
max_tokens: Optional[int] = Field(
|
195 |
+
None, description="Maximum tokens to generate"
|
196 |
+
)
|
197 |
+
temperature_override: Optional[float] = 0.5
|
198 |
+
stream: bool = Field(
|
199 |
+
default=False, description="Enable streaming response"
|
200 |
+
)
|
201 |
+
|
202 |
+
|
203 |
+
class CompletionResponse(BaseModel):
|
204 |
+
"""Model for completion responses."""
|
205 |
+
|
206 |
+
agent_id: UUID
|
207 |
+
response: str
|
208 |
+
metadata: Dict[str, Any]
|
209 |
+
timestamp: datetime
|
210 |
+
processing_time: float
|
211 |
+
token_usage: Dict[str, int]
|
212 |
+
|
213 |
+
|
214 |
+
class AgentStore:
|
215 |
+
"""Enhanced store for managing agents."""
|
216 |
+
|
217 |
+
def __init__(self):
|
218 |
+
self.agents: Dict[UUID, Agent] = {}
|
219 |
+
self.agent_metadata: Dict[UUID, Dict[str, Any]] = {}
|
220 |
+
self.users: Dict[UUID, User] = {} # user_id -> User
|
221 |
+
self.api_keys: Dict[str, UUID] = {} # api_key -> user_id
|
222 |
+
self.user_agents: Dict[UUID, List[UUID]] = (
|
223 |
+
{}
|
224 |
+
) # user_id -> [agent_ids]
|
225 |
+
self.executor = ThreadPoolExecutor(max_workers=4)
|
226 |
+
self._ensure_directories()
|
227 |
+
|
228 |
+
def _ensure_directories(self):
|
229 |
+
"""Ensure required directories exist."""
|
230 |
+
Path("logs").mkdir(exist_ok=True)
|
231 |
+
Path("states").mkdir(exist_ok=True)
|
232 |
+
|
233 |
+
def create_api_key(self, user_id: UUID, key_name: str) -> APIKey:
|
234 |
+
"""Create a new API key for a user."""
|
235 |
+
if user_id not in self.users:
|
236 |
+
raise HTTPException(
|
237 |
+
status_code=status.HTTP_404_NOT_FOUND,
|
238 |
+
detail="User not found",
|
239 |
+
)
|
240 |
+
|
241 |
+
# Generate a secure random API key
|
242 |
+
api_key = secrets.token_urlsafe(API_KEY_LENGTH)
|
243 |
+
|
244 |
+
# Create the API key object
|
245 |
+
key_object = APIKey(
|
246 |
+
key=api_key,
|
247 |
+
name=key_name,
|
248 |
+
created_at=datetime.utcnow(),
|
249 |
+
last_used=datetime.utcnow(),
|
250 |
+
)
|
251 |
+
|
252 |
+
# Store the API key
|
253 |
+
self.users[user_id].api_keys[api_key] = key_object
|
254 |
+
self.api_keys[api_key] = user_id
|
255 |
+
|
256 |
+
return key_object
|
257 |
+
|
258 |
+
async def verify_agent_access(
|
259 |
+
self, agent_id: UUID, user_id: UUID
|
260 |
+
) -> bool:
|
261 |
+
"""Verify if a user has access to an agent."""
|
262 |
+
if agent_id not in self.agents:
|
263 |
+
return False
|
264 |
+
return (
|
265 |
+
self.agent_metadata[agent_id]["owner_id"] == user_id
|
266 |
+
or self.users[user_id].is_admin
|
267 |
+
)
|
268 |
+
|
269 |
+
async def create_agent(
|
270 |
+
self, config: AgentConfig, user_id: UUID
|
271 |
+
) -> UUID:
|
272 |
+
"""Create a new agent with the given configuration."""
|
273 |
+
try:
|
274 |
+
|
275 |
+
agent = Agent(
|
276 |
+
agent_name=config.agent_name,
|
277 |
+
system_prompt=config.system_prompt,
|
278 |
+
model_name=config.model_name,
|
279 |
+
max_loops=config.max_loops,
|
280 |
+
verbose=config.verbose,
|
281 |
+
dynamic_temperature_enabled=True,
|
282 |
+
user_name=config.user_name,
|
283 |
+
retry_attempts=config.retry_attempts,
|
284 |
+
context_length=config.context_length,
|
285 |
+
return_step_meta=False,
|
286 |
+
output_type="str",
|
287 |
+
streaming_on=config.streaming_on,
|
288 |
+
stopping_token=config.stopping_token,
|
289 |
+
auto_generate_prompt=config.auto_generate_prompt,
|
290 |
+
)
|
291 |
+
|
292 |
+
agent_id = uuid4()
|
293 |
+
self.agents[agent_id] = agent
|
294 |
+
self.agent_metadata[agent_id] = {
|
295 |
+
"description": config.description,
|
296 |
+
"created_at": datetime.utcnow(),
|
297 |
+
"last_used": datetime.utcnow(),
|
298 |
+
"total_completions": 0,
|
299 |
+
"tags": config.tags,
|
300 |
+
"total_tokens": 0,
|
301 |
+
"error_count": 0,
|
302 |
+
"response_times": [],
|
303 |
+
"status": AgentStatus.IDLE,
|
304 |
+
"start_time": datetime.utcnow(),
|
305 |
+
"downtime": timedelta(),
|
306 |
+
"successful_completions": 0,
|
307 |
+
}
|
308 |
+
|
309 |
+
# Add to user's agents list
|
310 |
+
if user_id not in self.user_agents:
|
311 |
+
self.user_agents[user_id] = []
|
312 |
+
self.user_agents[user_id].append(agent_id)
|
313 |
+
|
314 |
+
return agent_id
|
315 |
+
|
316 |
+
except Exception as e:
|
317 |
+
logger.error(f"Error creating agent: {str(e)}")
|
318 |
+
raise HTTPException(
|
319 |
+
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
320 |
+
detail=f"Failed to create agent: {str(e)}",
|
321 |
+
)
|
322 |
+
|
323 |
+
async def get_agent(self, agent_id: UUID) -> Agent:
|
324 |
+
"""Retrieve an agent by ID."""
|
325 |
+
agent = self.agents.get(agent_id)
|
326 |
+
if not agent:
|
327 |
+
logger.error(f"Agent not found: {agent_id}")
|
328 |
+
raise HTTPException(
|
329 |
+
status_code=status.HTTP_404_NOT_FOUND,
|
330 |
+
detail=f"Agent {agent_id} not found",
|
331 |
+
)
|
332 |
+
return agent
|
333 |
+
|
334 |
+
async def update_agent(
|
335 |
+
self, agent_id: UUID, update: AgentUpdate
|
336 |
+
) -> None:
|
337 |
+
"""Update agent configuration."""
|
338 |
+
agent = await self.get_agent(agent_id)
|
339 |
+
metadata = self.agent_metadata[agent_id]
|
340 |
+
|
341 |
+
if update.system_prompt:
|
342 |
+
agent.system_prompt = update.system_prompt
|
343 |
+
if update.max_loops is not None:
|
344 |
+
agent.max_loops = update.max_loops
|
345 |
+
if update.tags is not None:
|
346 |
+
metadata["tags"] = update.tags
|
347 |
+
if update.description is not None:
|
348 |
+
metadata["description"] = update.description
|
349 |
+
if update.status is not None:
|
350 |
+
metadata["status"] = update.status
|
351 |
+
if update.status == AgentStatus.MAINTENANCE:
|
352 |
+
metadata["downtime"] += (
|
353 |
+
datetime.utcnow() - metadata["last_used"]
|
354 |
+
)
|
355 |
+
|
356 |
+
logger.info(f"Updated agent {agent_id}")
|
357 |
+
|
358 |
+
def ensure_user_api_key(self, user_id: UUID) -> APIKey:
|
359 |
+
"""Ensure user has at least one active API key."""
|
360 |
+
if user_id not in self.users:
|
361 |
+
raise HTTPException(
|
362 |
+
status_code=status.HTTP_404_NOT_FOUND,
|
363 |
+
detail="User not found",
|
364 |
+
)
|
365 |
+
|
366 |
+
user = self.users[user_id]
|
367 |
+
existing_key = user.ensure_active_api_key()
|
368 |
+
if existing_key:
|
369 |
+
return existing_key
|
370 |
+
|
371 |
+
# Create new API key if none exists
|
372 |
+
return self.create_api_key(user_id, "Default Key")
|
373 |
+
|
374 |
+
def validate_api_key(self, api_key: str) -> Optional[UUID]:
|
375 |
+
"""Validate an API key and return the associated user ID."""
|
376 |
+
if not api_key:
|
377 |
+
return None
|
378 |
+
|
379 |
+
user_id = self.api_keys.get(api_key)
|
380 |
+
if not user_id or api_key not in self.users[user_id].api_keys:
|
381 |
+
return None
|
382 |
+
|
383 |
+
key_object = self.users[user_id].api_keys[api_key]
|
384 |
+
if not key_object.is_active:
|
385 |
+
return None
|
386 |
+
|
387 |
+
# Update last used timestamp
|
388 |
+
key_object.last_used = datetime.utcnow()
|
389 |
+
return user_id
|
390 |
+
|
391 |
+
async def list_agents(
|
392 |
+
self,
|
393 |
+
tags: Optional[List[str]] = None,
|
394 |
+
status: Optional[AgentStatus] = None,
|
395 |
+
) -> List[AgentSummary]:
|
396 |
+
"""List all agents, optionally filtered by tags and status."""
|
397 |
+
summaries = []
|
398 |
+
for agent_id, agent in self.agents.items():
|
399 |
+
metadata = self.agent_metadata[agent_id]
|
400 |
+
|
401 |
+
# Apply filters
|
402 |
+
if tags and not any(
|
403 |
+
tag in metadata["tags"] for tag in tags
|
404 |
+
):
|
405 |
+
continue
|
406 |
+
if status and metadata["status"] != status:
|
407 |
+
continue
|
408 |
+
|
409 |
+
summaries.append(
|
410 |
+
AgentSummary(
|
411 |
+
agent_id=agent_id,
|
412 |
+
agent_name=agent.agent_name,
|
413 |
+
system_prompt=agent.system_prompt,
|
414 |
+
description=metadata["description"],
|
415 |
+
created_at=metadata["created_at"],
|
416 |
+
last_used=metadata["last_used"],
|
417 |
+
total_completions=metadata["total_completions"],
|
418 |
+
tags=metadata["tags"],
|
419 |
+
status=metadata["status"],
|
420 |
+
)
|
421 |
+
)
|
422 |
+
return summaries
|
423 |
+
|
424 |
+
async def get_agent_metrics(self, agent_id: UUID) -> AgentMetrics:
|
425 |
+
"""Get performance metrics for an agent."""
|
426 |
+
metadata = self.agent_metadata[agent_id]
|
427 |
+
response_times = metadata["response_times"]
|
428 |
+
|
429 |
+
# Calculate metrics
|
430 |
+
total_time = datetime.utcnow() - metadata["start_time"]
|
431 |
+
uptime = total_time - metadata["downtime"]
|
432 |
+
uptime_percentage = (
|
433 |
+
uptime.total_seconds() / total_time.total_seconds()
|
434 |
+
) * 100
|
435 |
+
|
436 |
+
success_rate = (
|
437 |
+
metadata["successful_completions"]
|
438 |
+
/ metadata["total_completions"]
|
439 |
+
* 100
|
440 |
+
if metadata["total_completions"] > 0
|
441 |
+
else 0
|
442 |
+
)
|
443 |
+
|
444 |
+
return AgentMetrics(
|
445 |
+
total_completions=metadata["total_completions"],
|
446 |
+
average_response_time=(
|
447 |
+
sum(response_times) / len(response_times)
|
448 |
+
if response_times
|
449 |
+
else 0
|
450 |
+
),
|
451 |
+
error_rate=(
|
452 |
+
metadata["error_count"]
|
453 |
+
/ metadata["total_completions"]
|
454 |
+
if metadata["total_completions"] > 0
|
455 |
+
else 0
|
456 |
+
),
|
457 |
+
last_24h_completions=sum(
|
458 |
+
1
|
459 |
+
for t in response_times
|
460 |
+
if (datetime.utcnow() - t).days < 1
|
461 |
+
),
|
462 |
+
total_tokens_used=metadata["total_tokens"],
|
463 |
+
uptime_percentage=uptime_percentage,
|
464 |
+
success_rate=success_rate,
|
465 |
+
peak_tokens_per_minute=max(
|
466 |
+
metadata.get("tokens_per_minute", [0])
|
467 |
+
),
|
468 |
+
)
|
469 |
+
|
470 |
+
async def clone_agent(
|
471 |
+
self, agent_id: UUID, new_name: str
|
472 |
+
) -> UUID:
|
473 |
+
"""Clone an existing agent with a new name."""
|
474 |
+
original_agent = await self.get_agent(agent_id)
|
475 |
+
original_metadata = self.agent_metadata[agent_id]
|
476 |
+
|
477 |
+
config = AgentConfig(
|
478 |
+
agent_name=new_name,
|
479 |
+
description=f"Clone of {original_agent.agent_name}",
|
480 |
+
system_prompt=original_agent.system_prompt,
|
481 |
+
model_name=original_agent.model_name,
|
482 |
+
temperature=0.5,
|
483 |
+
max_loops=original_agent.max_loops,
|
484 |
+
tags=original_metadata["tags"],
|
485 |
+
)
|
486 |
+
|
487 |
+
return await self.create_agent(config)
|
488 |
+
|
489 |
+
async def delete_agent(self, agent_id: UUID) -> None:
|
490 |
+
"""Delete an agent."""
|
491 |
+
if agent_id not in self.agents:
|
492 |
+
raise HTTPException(
|
493 |
+
status_code=status.HTTP_404_NOT_FOUND,
|
494 |
+
detail=f"Agent {agent_id} not found",
|
495 |
+
)
|
496 |
+
|
497 |
+
# Clean up any resources
|
498 |
+
agent = self.agents[agent_id]
|
499 |
+
if agent.autosave and os.path.exists(agent.saved_state_path):
|
500 |
+
os.remove(agent.saved_state_path)
|
501 |
+
|
502 |
+
del self.agents[agent_id]
|
503 |
+
del self.agent_metadata[agent_id]
|
504 |
+
logger.info(f"Deleted agent {agent_id}")
|
505 |
+
|
506 |
+
async def process_completion(
|
507 |
+
self,
|
508 |
+
agent: Agent,
|
509 |
+
prompt: str,
|
510 |
+
agent_id: UUID,
|
511 |
+
max_tokens: Optional[int] = None,
|
512 |
+
temperature_override: Optional[float] = None,
|
513 |
+
) -> CompletionResponse:
|
514 |
+
"""Process a completion request using the specified agent."""
|
515 |
+
start_time = datetime.utcnow()
|
516 |
+
metadata = self.agent_metadata[agent_id]
|
517 |
+
|
518 |
+
try:
|
519 |
+
# Update agent status
|
520 |
+
metadata["status"] = AgentStatus.PROCESSING
|
521 |
+
metadata["last_used"] = start_time
|
522 |
+
|
523 |
+
# Process the completion
|
524 |
+
response = agent.run(prompt)
|
525 |
+
|
526 |
+
# Update metrics
|
527 |
+
processing_time = (
|
528 |
+
datetime.utcnow() - start_time
|
529 |
+
).total_seconds()
|
530 |
+
metadata["response_times"].append(processing_time)
|
531 |
+
metadata["total_completions"] += 1
|
532 |
+
metadata["successful_completions"] += 1
|
533 |
+
|
534 |
+
# Estimate token usage (this is a rough estimate)
|
535 |
+
prompt_tokens = len(prompt.split()) * 1.3
|
536 |
+
completion_tokens = len(response.split()) * 1.3
|
537 |
+
total_tokens = int(prompt_tokens + completion_tokens)
|
538 |
+
metadata["total_tokens"] += total_tokens
|
539 |
+
|
540 |
+
# Update tokens per minute tracking
|
541 |
+
current_minute = datetime.utcnow().replace(
|
542 |
+
second=0, microsecond=0
|
543 |
+
)
|
544 |
+
if "tokens_per_minute" not in metadata:
|
545 |
+
metadata["tokens_per_minute"] = {}
|
546 |
+
metadata["tokens_per_minute"][current_minute] = (
|
547 |
+
metadata["tokens_per_minute"].get(current_minute, 0)
|
548 |
+
+ total_tokens
|
549 |
+
)
|
550 |
+
|
551 |
+
return CompletionResponse(
|
552 |
+
agent_id=agent_id,
|
553 |
+
response=response,
|
554 |
+
metadata={
|
555 |
+
"agent_name": agent.agent_name,
|
556 |
+
# "model_name": agent.llm.model_name,
|
557 |
+
# "temperature": 0.5,
|
558 |
+
},
|
559 |
+
timestamp=datetime.utcnow(),
|
560 |
+
processing_time=processing_time,
|
561 |
+
token_usage={
|
562 |
+
"prompt_tokens": int(prompt_tokens),
|
563 |
+
"completion_tokens": int(completion_tokens),
|
564 |
+
"total_tokens": total_tokens,
|
565 |
+
},
|
566 |
+
)
|
567 |
+
|
568 |
+
except Exception as e:
|
569 |
+
metadata["error_count"] += 1
|
570 |
+
metadata["status"] = AgentStatus.ERROR
|
571 |
+
logger.error(
|
572 |
+
f"Error in completion processing: {str(e)}\n{traceback.format_exc()}"
|
573 |
+
)
|
574 |
+
raise HTTPException(
|
575 |
+
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
576 |
+
detail=f"Error processing completion: {str(e)}",
|
577 |
+
)
|
578 |
+
finally:
|
579 |
+
metadata["status"] = AgentStatus.IDLE
|
580 |
+
|
581 |
+
|
582 |
+
class StoreManager:
|
583 |
+
_instance = None
|
584 |
+
|
585 |
+
@classmethod
|
586 |
+
def get_instance(cls) -> "AgentStore":
|
587 |
+
if cls._instance is None:
|
588 |
+
cls._instance = AgentStore()
|
589 |
+
return cls._instance
|
590 |
+
|
591 |
+
|
592 |
+
# Modify the dependency function
|
593 |
+
def get_store() -> AgentStore:
|
594 |
+
"""Dependency to get the AgentStore instance."""
|
595 |
+
return StoreManager.get_instance()
|
596 |
+
|
597 |
+
|
598 |
+
# Modify the get_current_user dependency
|
599 |
+
async def get_current_user(
|
600 |
+
api_key: str = Header(
|
601 |
+
..., description="API key for authentication"
|
602 |
+
),
|
603 |
+
store: AgentStore = Depends(get_store),
|
604 |
+
) -> User:
|
605 |
+
"""Validate API key and return current user."""
|
606 |
+
if not api_key:
|
607 |
+
raise HTTPException(
|
608 |
+
status_code=status.HTTP_401_UNAUTHORIZED,
|
609 |
+
detail="API key is required",
|
610 |
+
headers={"WWW-Authenticate": "ApiKey"},
|
611 |
+
)
|
612 |
+
|
613 |
+
user_id = store.validate_api_key(api_key)
|
614 |
+
if not user_id:
|
615 |
+
raise HTTPException(
|
616 |
+
status_code=status.HTTP_401_UNAUTHORIZED,
|
617 |
+
detail="Invalid or expired API key",
|
618 |
+
headers={"WWW-Authenticate": "ApiKey"},
|
619 |
+
)
|
620 |
+
|
621 |
+
user = store.users.get(user_id)
|
622 |
+
if not user:
|
623 |
+
raise HTTPException(
|
624 |
+
status_code=status.HTTP_404_NOT_FOUND,
|
625 |
+
detail="User not found",
|
626 |
+
)
|
627 |
+
|
628 |
+
if not user.ensure_active_api_key():
|
629 |
+
# Attempt to create new API key
|
630 |
+
store.ensure_user_api_key(user_id)
|
631 |
+
|
632 |
+
return user
|
633 |
+
|
634 |
+
|
635 |
+
class SwarmsAPI:
|
636 |
+
"""Enhanced API class for Swarms agent integration."""
|
637 |
+
|
638 |
+
def __init__(self):
|
639 |
+
self.app = FastAPI(
|
640 |
+
title="Swarms Agent API",
|
641 |
+
description="Production-grade API for Swarms agent interaction",
|
642 |
+
version="1.0.0",
|
643 |
+
docs_url="/v1/docs",
|
644 |
+
redoc_url="/v1/redoc",
|
645 |
+
)
|
646 |
+
# Initialize the store using the singleton manager
|
647 |
+
self.store = StoreManager.get_instance()
|
648 |
+
|
649 |
+
# Configure CORS
|
650 |
+
self.app.add_middleware(
|
651 |
+
CORSMiddleware,
|
652 |
+
allow_origins=[
|
653 |
+
"*"
|
654 |
+
], # Configure appropriately for production
|
655 |
+
allow_credentials=True,
|
656 |
+
allow_methods=["*"],
|
657 |
+
allow_headers=["*"],
|
658 |
+
)
|
659 |
+
|
660 |
+
self._setup_routes()
|
661 |
+
|
662 |
+
def _setup_routes(self):
|
663 |
+
"""Set up API routes."""
|
664 |
+
|
665 |
+
# In your API code
|
666 |
+
|
667 |
+
# Modify the create_user endpoint
|
668 |
+
@self.app.post("/v1/users", response_model=Dict[str, Any])
|
669 |
+
async def create_user(request: Request):
|
670 |
+
"""Create a new user and initial API key."""
|
671 |
+
try:
|
672 |
+
body = await request.json()
|
673 |
+
username = body.get("username")
|
674 |
+
if not username or len(username) < 3:
|
675 |
+
raise HTTPException(
|
676 |
+
status_code=400, detail="Invalid username"
|
677 |
+
)
|
678 |
+
|
679 |
+
user_id = uuid4()
|
680 |
+
user = User(id=user_id, username=username)
|
681 |
+
self.store.users[user_id] = user
|
682 |
+
|
683 |
+
# Always create initial API key
|
684 |
+
initial_key = self.store.create_api_key(
|
685 |
+
user_id, "Initial Key"
|
686 |
+
)
|
687 |
+
if not initial_key:
|
688 |
+
raise HTTPException(
|
689 |
+
status_code=500,
|
690 |
+
detail="Failed to create initial API key",
|
691 |
+
)
|
692 |
+
|
693 |
+
return {
|
694 |
+
"user_id": user_id,
|
695 |
+
"api_key": initial_key.key,
|
696 |
+
}
|
697 |
+
except Exception as e:
|
698 |
+
logger.error(f"Error creating user: {str(e)}")
|
699 |
+
raise HTTPException(status_code=400, detail=str(e))
|
700 |
+
|
701 |
+
@self.app.get(
|
702 |
+
"/v1/users/{user_id}/api-keys",
|
703 |
+
response_model=List[APIKey],
|
704 |
+
)
|
705 |
+
async def list_api_keys(
|
706 |
+
user_id: UUID,
|
707 |
+
current_user: User = Depends(get_current_user),
|
708 |
+
):
|
709 |
+
"""List all API keys for a user."""
|
710 |
+
if (
|
711 |
+
current_user.id != user_id
|
712 |
+
and not current_user.is_admin
|
713 |
+
):
|
714 |
+
raise HTTPException(
|
715 |
+
status_code=status.HTTP_403_FORBIDDEN,
|
716 |
+
detail="Not authorized to view API keys for this user",
|
717 |
+
)
|
718 |
+
|
719 |
+
return list(self.store.users[user_id].api_keys.values())
|
720 |
+
|
721 |
+
@self.app.delete("/v1/users/{user_id}/api-keys/{key}")
|
722 |
+
async def revoke_api_key(
|
723 |
+
user_id: UUID,
|
724 |
+
key: str,
|
725 |
+
current_user: User = Depends(get_current_user),
|
726 |
+
):
|
727 |
+
"""Revoke an API key."""
|
728 |
+
if (
|
729 |
+
current_user.id != user_id
|
730 |
+
and not current_user.is_admin
|
731 |
+
):
|
732 |
+
raise HTTPException(
|
733 |
+
status_code=status.HTTP_403_FORBIDDEN,
|
734 |
+
detail="Not authorized to revoke API keys for this user",
|
735 |
+
)
|
736 |
+
|
737 |
+
if key in self.store.users[user_id].api_keys:
|
738 |
+
self.store.users[user_id].api_keys[
|
739 |
+
key
|
740 |
+
].is_active = False
|
741 |
+
del self.store.api_keys[key]
|
742 |
+
return {"status": "API key revoked"}
|
743 |
+
|
744 |
+
raise HTTPException(
|
745 |
+
status_code=status.HTTP_404_NOT_FOUND,
|
746 |
+
detail="API key not found",
|
747 |
+
)
|
748 |
+
|
749 |
+
@self.app.get(
|
750 |
+
"/v1/users/me/agents", response_model=List[AgentSummary]
|
751 |
+
)
|
752 |
+
async def list_user_agents(
|
753 |
+
current_user: User = Depends(get_current_user),
|
754 |
+
tags: Optional[List[str]] = Query(None),
|
755 |
+
status: Optional[AgentStatus] = None,
|
756 |
+
):
|
757 |
+
"""List all agents owned by the current user."""
|
758 |
+
user_agents = self.store.user_agents.get(
|
759 |
+
current_user.id, []
|
760 |
+
)
|
761 |
+
return [
|
762 |
+
agent
|
763 |
+
for agent in await self.store.list_agents(
|
764 |
+
tags, status
|
765 |
+
)
|
766 |
+
if agent.agent_id in user_agents
|
767 |
+
]
|
768 |
+
|
769 |
+
# Modify existing routes to use API key authentication
|
770 |
+
@self.app.post("/v1/agent", response_model=Dict[str, UUID])
|
771 |
+
async def create_agent(
|
772 |
+
config: AgentConfig,
|
773 |
+
current_user: User = Depends(get_current_user),
|
774 |
+
):
|
775 |
+
"""Create a new agent with the specified configuration."""
|
776 |
+
agent_id = await self.store.create_agent(
|
777 |
+
config, current_user.id
|
778 |
+
)
|
779 |
+
return {"agent_id": agent_id}
|
780 |
+
|
781 |
+
@self.app.get("/v1/agents", response_model=List[AgentSummary])
|
782 |
+
async def list_agents(
|
783 |
+
tags: Optional[List[str]] = Query(None),
|
784 |
+
status: Optional[AgentStatus] = None,
|
785 |
+
):
|
786 |
+
"""List all agents, optionally filtered by tags and status."""
|
787 |
+
return await self.store.list_agents(tags, status)
|
788 |
+
|
789 |
+
@self.app.patch(
|
790 |
+
"/v1/agent/{agent_id}", response_model=Dict[str, str]
|
791 |
+
)
|
792 |
+
async def update_agent(agent_id: UUID, update: AgentUpdate):
|
793 |
+
"""Update an existing agent's configuration."""
|
794 |
+
await self.store.update_agent(agent_id, update)
|
795 |
+
return {"status": "updated"}
|
796 |
+
|
797 |
+
@self.app.get(
|
798 |
+
"/v1/agent/{agent_id}/metrics",
|
799 |
+
response_model=AgentMetrics,
|
800 |
+
)
|
801 |
+
async def get_agent_metrics(agent_id: UUID):
|
802 |
+
"""Get performance metrics for a specific agent."""
|
803 |
+
return await self.store.get_agent_metrics(agent_id)
|
804 |
+
|
805 |
+
@self.app.post(
|
806 |
+
"/v1/agent/{agent_id}/clone",
|
807 |
+
response_model=Dict[str, UUID],
|
808 |
+
)
|
809 |
+
async def clone_agent(agent_id: UUID, new_name: str):
|
810 |
+
"""Clone an existing agent with a new name."""
|
811 |
+
            new_id = await self.store.clone_agent(agent_id, new_name)
            return {"agent_id": new_id}

        @self.app.delete("/v1/agent/{agent_id}")
        async def delete_agent(agent_id: UUID):
            """Delete an agent."""
            await self.store.delete_agent(agent_id)
            return {"status": "deleted"}

        @self.app.post(
            "/v1/agent/completions", response_model=CompletionResponse
        )
        async def create_completion(
            request: CompletionRequest,
            background_tasks: BackgroundTasks,
        ):
            """Process a completion request with the specified agent."""
            try:
                agent = await self.store.get_agent(request.agent_id)

                # Process completion
                response = await self.store.process_completion(
                    agent,
                    request.prompt,
                    request.agent_id,
                    request.max_tokens,
                    0.5,
                )

                # Schedule background cleanup
                background_tasks.add_task(
                    self._cleanup_old_metrics, request.agent_id
                )

                return response

            except Exception as e:
                logger.error(f"Error processing completion: {str(e)}")
                raise HTTPException(
                    status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                    detail=f"Error processing completion: {str(e)}",
                )

        @self.app.get("/v1/agent/{agent_id}/status")
        async def get_agent_status(agent_id: UUID):
            """Get the current status of an agent."""
            metadata = self.store.agent_metadata.get(agent_id)
            if not metadata:
                raise HTTPException(
                    status_code=status.HTTP_404_NOT_FOUND,
                    detail=f"Agent {agent_id} not found",
                )
            return {
                "agent_id": agent_id,
                "status": metadata["status"],
                "last_used": metadata["last_used"],
                "total_completions": metadata["total_completions"],
                "error_count": metadata["error_count"],
            }

    async def _cleanup_old_metrics(self, agent_id: UUID):
        """Clean up old metrics data to prevent memory bloat."""
        metadata = self.store.agent_metadata.get(agent_id)
        if metadata:
            # Keep only last 24 hours of response times
            cutoff = datetime.utcnow() - timedelta(days=1)
            metadata["response_times"] = [
                t
                for t in metadata["response_times"]
                if isinstance(t, (int, float))
                and t > cutoff.timestamp()
            ]

            # Clean up old tokens per minute data
            if "tokens_per_minute" in metadata:
                metadata["tokens_per_minute"] = {
                    k: v
                    for k, v in metadata["tokens_per_minute"].items()
                    if k > cutoff
                }


class APIServer:
    def __init__(
        self, app: FastAPI, host: str = "0.0.0.0", port: int = 8000
    ):
        self.app = app
        self.host = host
        self.port = port
        self.config = uvicorn.Config(
            app=app,
            host=host,
            port=port,
            log_level="info",
            access_log=True,
            workers=os.cpu_count() * 2,
        )
        self.server = UvicornServer(config=self.config)

        # Setup signal handlers
        signal.signal(signal.SIGTERM, self._handle_signal)
        signal.signal(signal.SIGINT, self._handle_signal)

    def _handle_signal(self, signum, frame):
        """Handle shutdown signals"""
        logger.info(f"Received signal {signum}")
        asyncio.create_task(self.shutdown())

    async def startup(self) -> None:
        """Start the server"""
        try:
            logger.info(
                f"Starting API server on http://{self.host}:{self.port}"
            )
            print(
                f"Starting API server on http://{self.host}:{self.port}"
            )
            await self.server.serve()
        except Exception as e:
            logger.error(f"Failed to start server: {str(e)}")
            raise

    async def shutdown(self) -> None:
        """Shutdown the server"""
        try:
            logger.info("Initiating graceful shutdown...")
            await self.server.shutdown()
        except Exception as e:
            logger.error(f"Error during shutdown: {str(e)}")
            raise


@asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncGenerator:
    """Lifespan context manager for the FastAPI app"""
    # Startup
    logger.info("Starting up API server...")
    yield
    # Shutdown
    logger.info("Shutting down API server...")


def create_app() -> FastAPI:
    """Create and configure the FastAPI application"""
    logger.info("Creating FastAPI application")
    api = SwarmsAPI()
    app = api.app

    # Add lifespan handling
    app.router.lifespan_context = lifespan

    logger.info("FastAPI application created successfully")
    return app


def run_server():
    """Run the API server"""
    try:
        # Create the FastAPI app
        app = create_app()

        # Create and run the server
        server = APIServer(app)
        asyncio.run(server.startup())
    except Exception as e:
        logger.error(f"Failed to start API: {str(e)}")
        print(f"Error starting server: {str(e)}")


if __name__ == "__main__":
    run_server()
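A quick manual check of the status route defined above can be done with a small client script. This is an illustrative sketch, not part of the commit: it assumes the server is running locally on port 8000 and that the placeholder `agent_id` is replaced with a UUID returned by the agent-creation endpoint.

```python
import requests

# Hypothetical placeholder; use a UUID returned by the create-agent endpoint.
agent_id = "00000000-0000-0000-0000-000000000000"

# Query the status endpoint defined in advanced_api.py.
resp = requests.get(f"http://localhost:8000/v1/agent/{agent_id}/status")
resp.raise_for_status()
info = resp.json()
print(info["status"], info["total_completions"], info["error_count"])
```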
api/requirements.txt
ADDED
@@ -0,0 +1,11 @@
fastapi
uvicorn
pydantic
loguru
python-dotenv
swarms # Specify the version or source if it's not on PyPI
opentelemetry-api
opentelemetry-sdk
opentelemetry-instrumentation-fastapi
opentelemetry-instrumentation-requests
opentelemetry-exporter-otlp-proto-grpc
api/skypilot.yaml
ADDED
@@ -0,0 +1,37 @@
service:
  readiness_probe:
    path: /docs
    initial_delay_seconds: 300
    timeout_seconds: 30

  replica_policy:
    min_replicas: 1
    max_replicas: 50
    target_qps_per_replica: 5
    upscale_delay_seconds: 180
    downscale_delay_seconds: 600

resources:
  ports: 8000  # FastAPI default port
  cpus: 16
  memory: 64
  disk_size: 100
  use_spot: true

workdir: /app

setup: |
  git clone https://github.com/kyegomez/swarms.git
  cd swarms/api
  pip install -r requirements.txt
  pip install swarms

run: |
  cd swarms/api
  uvicorn main:app --host 0.0.0.0 --port 8000 --workers 4

# env:
#   PYTHONPATH: /app/swarms
#   LOG_LEVEL: "INFO"
#   # MAX_WORKERS: "4"
api/test_api.py
ADDED
@@ -0,0 +1,112 @@
import requests
import json
from time import sleep

# Note: the endpoint paths below already include the /v1 prefix,
# so the base URL must not repeat it.
BASE_URL = "http://0.0.0.0:8000"

def make_request(method, endpoint, data=None):
    """Helper function to make requests with error handling"""
    url = f"{BASE_URL}{endpoint}"
    try:
        if method == "GET":
            response = requests.get(url)
        elif method == "POST":
            response = requests.post(url, json=data)
        elif method == "DELETE":
            response = requests.delete(url)

        response.raise_for_status()
        return response.json()
    except requests.exceptions.RequestException as e:
        print(
            f"Error making {method} request to {endpoint}: {str(e)}"
        )
        if hasattr(e.response, "text"):
            print(f"Response text: {e.response.text}")
        return None


def create_agent():
    """Create a test agent"""
    data = {
        "agent_name": "test_agent",
        "model_name": "gpt-4",
        "system_prompt": "You are a helpful assistant",
        "description": "Test agent",
        "temperature": 0.7,
        "max_loops": 1,
        "tags": ["test"],
    }
    return make_request("POST", "/v1/agent", data)


def list_agents():
    """List all agents"""
    return make_request("GET", "/v1/agents")


def test_completion(agent_id):
    """Test a completion with the agent"""
    data = {
        "prompt": "Say hello!",
        "agent_id": agent_id,
        "max_tokens": 100,
    }
    return make_request("POST", "/v1/agent/completions", data)


def get_agent_metrics(agent_id):
    """Get metrics for an agent"""
    return make_request("GET", f"/v1/agent/{agent_id}/metrics")


def delete_agent(agent_id):
    """Delete an agent"""
    return make_request("DELETE", f"/v1/agent/{agent_id}")


def run_tests():
    print("Starting API tests...")

    # Create an agent
    print("\n1. Creating agent...")
    agent_response = create_agent()
    if not agent_response:
        print("Failed to create agent")
        return

    agent_id = agent_response.get("agent_id")
    print(f"Created agent with ID: {agent_id}")

    # Give the server a moment to process
    sleep(2)

    # List agents
    print("\n2. Listing agents...")
    agents = list_agents()
    print(f"Found {len(agents)} agents")

    # Test completion
    if agent_id:
        print("\n3. Testing completion...")
        completion = test_completion(agent_id)
        if completion:
            print(
                f"Completion response: {completion.get('response')}"
            )

        print("\n4. Getting agent metrics...")
        metrics = get_agent_metrics(agent_id)
        if metrics:
            print(f"Agent metrics: {json.dumps(metrics, indent=2)}")

    # Clean up
    # print("\n5. Cleaning up - deleting agent...")
    # delete_result = delete_agent(agent_id)
    # if delete_result:
    #     print("Successfully deleted agent")


if __name__ == "__main__":
    run_tests()
docs/.readthedocs.yaml
ADDED
@@ -0,0 +1,11 @@
---
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
mkdocs:
  configuration: docs/mkdocs.yml
python:
  install:
    - requirements: docs/requirements.txt
docs/applications/azure_openai.md
ADDED
@@ -0,0 +1,131 @@
# Deploying Azure OpenAI in Production: A Comprehensive Guide

In today's fast-paced digital landscape, leveraging cutting-edge technologies has become essential for businesses to stay competitive and provide exceptional services to their customers. One such technology that has gained significant traction is Azure OpenAI, a powerful platform that allows developers to integrate advanced natural language processing (NLP) capabilities into their applications. Whether you're building a chatbot, a content generation system, or any other AI-powered solution, Azure OpenAI offers a robust and scalable solution for production-grade deployment.

In this comprehensive guide, we'll walk through the process of setting up and deploying Azure OpenAI in a production environment. We'll dive deep into the code, provide clear explanations, and share best practices to ensure a smooth and successful implementation.

## Prerequisites:
Before we begin, it's essential to have the following prerequisites in place:

1. **Python**: You'll need to have Python installed on your system. This guide assumes you're using Python 3.6 or later.
2. **Azure Subscription**: You'll need an active Azure subscription to access Azure OpenAI services.
3. **Azure OpenAI Resource**: Create an Azure OpenAI resource in your Azure subscription.
4. **Python Packages**: Install the required Python packages, including `python-dotenv` and `swarms`.

## Setting up the Environment:
To kick things off, we'll set up our development environment and install the necessary dependencies.

1. **Create a Virtual Environment**: It's a best practice to create a virtual environment to isolate your project dependencies from the rest of your system. You can create a virtual environment using `venv` or any other virtual environment management tool of your choice.

```
python -m venv myenv
```

2. **Activate the Virtual Environment**: Activate the virtual environment to ensure that any packages you install are isolated within the environment.

```
source myenv/bin/activate  # On Windows, use `myenv\Scripts\activate`
```

3. **Install Required Packages**: Install the `python-dotenv` and `swarms` packages using pip.

```
pip install python-dotenv swarms
```

4. **Create a `.env` File**: In the root directory of your project, create a new file called `.env`. This file will store your Azure OpenAI credentials and configuration settings.

```
AZURE_OPENAI_ENDPOINT=<your_azure_openai_endpoint>
AZURE_OPENAI_DEPLOYMENT=<your_azure_openai_deployment_name>
OPENAI_API_VERSION=<your_openai_api_version>
AZURE_OPENAI_API_KEY=<your_azure_openai_api_key>
AZURE_OPENAI_AD_TOKEN=<your_azure_openai_ad_token>
```

Replace the placeholders with your actual Azure OpenAI credentials and configuration settings.

## Connecting to Azure OpenAI:
Now that we've set up our environment, let's dive into the code that connects to Azure OpenAI and interacts with the language model.

```python
import os
from dotenv import load_dotenv
from swarms import AzureOpenAI

# Load the environment variables
load_dotenv()

# Create an instance of the AzureOpenAI class
model = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    deployment_name=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
    openai_api_version=os.getenv("OPENAI_API_VERSION"),
    openai_api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_ad_token=os.getenv("AZURE_OPENAI_AD_TOKEN")
)
```

## Let's break down this code:

1. **Import Statements**: We import the necessary modules, including `os` for interacting with the operating system, `load_dotenv` from `python-dotenv` to load environment variables, and `AzureOpenAI` from `swarms` to interact with the Azure OpenAI service.

2. **Load Environment Variables**: We use `load_dotenv()` to load the environment variables stored in the `.env` file we created earlier.

3. **Create AzureOpenAI Instance**: We create an instance of the `AzureOpenAI` class by passing in the required configuration parameters:
    - `azure_endpoint`: The endpoint URL for your Azure OpenAI resource.
    - `deployment_name`: The name of the deployment you want to use.
    - `openai_api_version`: The version of the OpenAI API you want to use.
    - `openai_api_key`: Your Azure OpenAI API key, which authenticates your requests.
    - `azure_ad_token`: An optional Azure Active Directory (AAD) token for additional security.

82 |
+
Querying the Language Model:
|
83 |
+
With our connection to Azure OpenAI established, we can now query the language model and receive responses.
|
84 |
+
|
85 |
+
```python
|
86 |
+
# Define the prompt
|
87 |
+
prompt = "Analyze this load document and assess it for any risks and create a table in markdwon format."
|
88 |
+
|
89 |
+
# Generate a response
|
90 |
+
response = model(prompt)
|
91 |
+
print(response)
|
92 |
+
```
|
93 |
+
|
## Here's what's happening:

1. **Define the Prompt**: We define a prompt, which is the input text or question we want to feed into the language model.

2. **Generate a Response**: We call the `model` instance with the `prompt` as an argument. This triggers the Azure OpenAI service to process the prompt and generate a response.

3. **Print the Response**: Finally, we print the response received from the language model.

## Running the Code:
To run the code, save it in a Python file (e.g., `main.py`) and execute it from the command line:

```
python main.py
```

## Best Practices for Production Deployment:
While the provided code serves as a basic example, there are several best practices to consider when deploying Azure OpenAI in a production environment:

1. **Secure Credentials Management**: Instead of storing sensitive credentials like API keys in your codebase, consider using secure storage solutions like Azure Key Vault or environment variables managed by your cloud provider.

2. **Error Handling and Retries**: Implement robust error handling and retry mechanisms to handle potential failures or rate-limiting scenarios (see the sketch after this list).

3. **Logging and Monitoring**: Implement comprehensive logging and monitoring strategies to track application performance, identify issues, and gather insights for optimization.

4. **Scalability and Load Testing**: Conduct load testing to ensure your application can handle anticipated traffic volumes and scale appropriately based on demand.

5. **Caching and Optimization**: Explore caching strategies and performance optimizations to improve response times and reduce the load on the Azure OpenAI service.

6. **Integration with Other Services**: Depending on your use case, you may need to integrate Azure OpenAI with other Azure services or third-party tools for tasks like data processing, storage, or analysis.

7. **Compliance and Security**: Ensure your application adheres to relevant compliance standards and security best practices, especially when handling sensitive data.

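As a minimal illustration of the retry advice in point 2, here is a sketch of a backoff wrapper around the `model` instance created earlier. The function name, attempt count, and delays are illustrative choices, not Azure OpenAI requirements; in practice you would catch the SDK's specific transient exceptions rather than the broad `Exception`.

```python
import time

def complete_with_retries(model, prompt, max_attempts=3, base_delay=1.0):
    """Call the model, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return model(prompt)
        except Exception as exc:  # narrow to the SDK's transient errors in practice
            if attempt == max_attempts:
                raise
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)

response = complete_with_retries(model, "Summarize the key risks in this document.")
```
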
## Conclusion:
Azure OpenAI is a powerful platform that enables developers to integrate advanced natural language processing capabilities into their applications. By following the steps outlined in this guide, you can set up a production-ready environment for deploying Azure OpenAI and start leveraging its capabilities in your projects.

Remember, this guide serves as a starting point, and there are numerous additional features and capabilities within Azure OpenAI that you can explore to enhance your applications further. As with any production deployment, it's crucial to follow best practices, conduct thorough testing, and implement robust monitoring and security measures.

With the right approach and careful planning, you can successfully deploy Azure OpenAI in a production environment and unlock the power of cutting-edge language models to drive innovation and provide exceptional experiences for your users.
docs/applications/blog.md
ADDED
@@ -0,0 +1,468 @@
# The Future of Manufacturing: Leveraging Autonomous LLM Agents for Cost Reduction and Revenue Growth

## Table of Contents

1. [Introduction](#introduction)
2. [Understanding Autonomous LLM Agents](#understanding-autonomous-llm-agents)
3. [RAG Embedding Databases: The Knowledge Foundation](#rag-embedding-databases)
4. [Function Calling and External Tools: Enhancing Capabilities](#function-calling-and-external-tools)
5. [Cost Reduction Strategies](#cost-reduction-strategies)
    5.1. [Optimizing Supply Chain Management](#optimizing-supply-chain-management)
    5.2. [Enhancing Quality Control](#enhancing-quality-control)
    5.3. [Streamlining Maintenance and Repairs](#streamlining-maintenance-and-repairs)
    5.4. [Improving Energy Efficiency](#improving-energy-efficiency)
6. [Revenue Growth Opportunities](#revenue-growth-opportunities)
    6.1. [Product Innovation and Development](#product-innovation-and-development)
    6.2. [Personalized Customer Experiences](#personalized-customer-experiences)
    6.3. [Market Analysis and Trend Prediction](#market-analysis-and-trend-prediction)
    6.4. [Optimizing Pricing Strategies](#optimizing-pricing-strategies)
7. [Implementation Strategies](#implementation-strategies)
8. [Overcoming Challenges and Risks](#overcoming-challenges-and-risks)
9. [Case Studies](#case-studies)
10. [Future Outlook](#future-outlook)
11. [Conclusion](#conclusion)

## 1. Introduction <a name="introduction"></a>

In today's rapidly evolving manufacturing landscape, executives and CEOs face unprecedented challenges and opportunities. The key to maintaining a competitive edge lies in embracing cutting-edge technologies that can revolutionize operations, reduce costs, and drive revenue growth. One such transformative technology is the integration of autonomous Large Language Model (LLM) agents equipped with Retrieval-Augmented Generation (RAG) embedding databases, function calling capabilities, and access to external tools.

This comprehensive blog post aims to explore how these advanced AI systems can be leveraged to address the most pressing issues in manufacturing enterprises. We will delve into the intricacies of these technologies, provide concrete examples of their applications, and offer insights into implementation strategies. By the end of this article, you will have a clear understanding of how autonomous LLM agents can become a cornerstone of your manufacturing business's digital transformation journey.

## 2. Understanding Autonomous LLM Agents <a name="understanding-autonomous-llm-agents"></a>

Autonomous LLM agents represent the cutting edge of artificial intelligence in the manufacturing sector. These sophisticated systems are built upon large language models, which are neural networks trained on vast amounts of text data. What sets them apart is their ability to operate autonomously, making decisions and taking actions with minimal human intervention.

Key features of autonomous LLM agents include:

1. **Natural Language Processing (NLP)**: They can understand and generate human-like text, enabling seamless communication with employees across all levels of the organization.

2. **Contextual Understanding**: These agents can grasp complex scenarios and nuanced information, making them ideal for handling intricate manufacturing processes.

3. **Adaptive Learning**: Through continuous interaction and feedback, they can improve their performance over time, becoming more efficient and accurate.

4. **Multi-modal Input Processing**: Advanced agents can process not only text but also images, audio, and sensor data, providing a holistic view of manufacturing operations.

5. **Task Automation**: They can automate a wide range of tasks, from data analysis to decision-making, freeing up human resources for more strategic activities.

The integration of autonomous LLM agents in manufacturing environments opens up new possibilities for optimization, innovation, and growth. As we explore their applications throughout this blog, it's crucial to understand that these agents are not meant to replace human workers but to augment their capabilities and drive overall productivity.

## 3. RAG Embedding Databases: The Knowledge Foundation <a name="rag-embedding-databases"></a>

At the heart of effective autonomous LLM agents lies the Retrieval-Augmented Generation (RAG) embedding database. This technology serves as the knowledge foundation, enabling agents to access and utilize vast amounts of relevant information quickly and accurately.

RAG embedding databases work by:

1. **Vectorizing Information**: Converting textual data into high-dimensional vectors that capture semantic meaning.

2. **Efficient Storage**: Organizing these vectors in a way that allows for rapid retrieval of relevant information.

3. **Contextual Retrieval**: Enabling the agent to pull relevant information based on the current context or query.

4. **Dynamic Updates**: Allowing for continuous updates to the knowledge base, ensuring the agent always has access to the most current information.

In the manufacturing context, RAG embedding databases can store a wealth of information, including:

- Technical specifications of machinery and products
- Historical production data and performance metrics
- Quality control guidelines and standards
- Supplier information and supply chain data
- Market trends and customer feedback

By leveraging RAG embedding databases, autonomous LLM agents can make informed decisions based on a comprehensive understanding of the manufacturing ecosystem. This leads to more accurate predictions, better problem-solving capabilities, and the ability to generate innovative solutions.

For example, when faced with a production bottleneck, an agent can quickly retrieve relevant historical data, equipment specifications, and best practices to propose an optimal solution. This rapid access to contextual information significantly reduces decision-making time and improves the quality of outcomes.

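As a minimal, self-contained sketch of the vectorize-and-retrieve flow described above: the toy documents and the bag-of-words `embed` function below are placeholders for a real embedding model and a vector database, which a production RAG deployment would use instead.

```python
import numpy as np

# Toy knowledge base of manufacturing snippets (illustrative only).
documents = [
    "line 3 extruder overheats when throughput exceeds limit",
    "preventive maintenance schedule for cnc mills every 500 hours",
    "supplier lead times increased to six weeks in q3",
]

# Placeholder embedding: normalized bag-of-words over a shared vocabulary.
vocab = sorted({w for d in documents for w in d.split()})

def embed(text: str) -> np.ndarray:
    counts = np.array([text.split().count(w) for w in vocab], dtype=float)
    norm = np.linalg.norm(counts)
    return counts / norm if norm else counts

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = doc_vectors @ embed(query)  # unit vectors: dot product = cosine
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

print(retrieve("why does the line 3 extruder overheat"))
```
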
## 4. Function Calling and External Tools: Enhancing Capabilities <a name="function-calling-and-external-tools"></a>

The true power of autonomous LLM agents in manufacturing environments is realized through their ability to interact with external systems and tools. This is achieved through function calling and integration with specialized external tools.

Function calling allows the agent to:

1. **Execute Specific Tasks**: Trigger predefined functions to perform complex operations or calculations.

2. **Interact with Databases**: Query and update various databases within the manufacturing ecosystem.

3. **Control Equipment**: Send commands to machinery or robotic systems on the production floor.

4. **Generate Reports**: Automatically compile and format data into meaningful reports for different stakeholders.

External tools that can be integrated include:

- **Predictive Maintenance Software**: To schedule and optimize equipment maintenance.
- **Supply Chain Management Systems**: For real-time tracking and optimization of inventory and logistics.
- **Quality Control Systems**: To monitor and analyze product quality metrics.
- **Energy Management Tools**: For monitoring and optimizing energy consumption across the facility.
- **Customer Relationship Management (CRM) Software**: To analyze customer data and improve service.

By combining the cognitive abilities of LLM agents with the specialized functionalities of external tools, manufacturing enterprises can create a powerful ecosystem that drives efficiency and innovation.

For instance, an autonomous agent could:

1. Detect an anomaly in production quality through data analysis.
2. Use function calling to query the maintenance database for equipment history.
3. Leverage an external predictive maintenance tool to assess the risk of equipment failure.
4. Automatically schedule maintenance and adjust production schedules to minimize downtime.
5. Generate a comprehensive report for management, detailing the issue, actions taken, and impact on production.

This level of integration and automation can lead to significant improvements in operational efficiency, cost reduction, and overall productivity.

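To make the function-calling pattern concrete, here is a minimal sketch of a dispatch table that executes structured tool calls. The tool names and payloads are hypothetical stand-ins; a real deployment would wire these functions to actual maintenance and scheduling systems, and the JSON call would be emitted by the LLM agent rather than hard-coded.

```python
import json

# Hypothetical tool implementations; real versions would call production systems.
def query_maintenance_history(machine_id: str) -> dict:
    return {"machine_id": machine_id, "last_service": "2024-05-01", "alerts": 2}

def schedule_maintenance(machine_id: str, window: str) -> dict:
    return {"machine_id": machine_id, "scheduled_for": window, "status": "booked"}

# Dispatch table mapping tool names to callables.
TOOLS = {
    "query_maintenance_history": query_maintenance_history,
    "schedule_maintenance": schedule_maintenance,
}

def call_tool(tool_call_json: str) -> dict:
    """Execute a structured call of the form {"name": ..., "arguments": {...}}."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

# An LLM agent would emit this JSON; it is hard-coded here for illustration.
print(call_tool('{"name": "query_maintenance_history", "arguments": {"machine_id": "extruder-3"}}'))
print(call_tool('{"name": "schedule_maintenance", "arguments": {"machine_id": "extruder-3", "window": "2024-06-15T06:00"}}'))
```
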
## 5. Cost Reduction Strategies <a name="cost-reduction-strategies"></a>

One of the primary benefits of implementing autonomous LLM agents in manufacturing is the potential for substantial cost reductions across various aspects of operations. Let's explore some key areas where these agents can drive down expenses:

### 5.1. Optimizing Supply Chain Management <a name="optimizing-supply-chain-management"></a>

Autonomous LLM agents can revolutionize supply chain management by:

- **Predictive Inventory Management**: Analyzing historical data, market trends, and production schedules to optimize inventory levels, reducing carrying costs and minimizing stockouts.

- **Supplier Selection and Negotiation**: Evaluating supplier performance, market conditions, and contract terms to recommend the most cost-effective suppliers and negotiate better deals.

- **Logistics Optimization**: Analyzing transportation routes, warehouse locations, and delivery schedules to minimize logistics costs and improve delivery times.

Example: A large automotive manufacturer implemented an autonomous LLM agent to optimize its global supply chain. The agent analyzed data from multiple sources, including production schedules, supplier performance metrics, and global shipping trends. By optimizing inventory levels and renegotiating supplier contracts, the company reduced supply chain costs by 15% in the first year, resulting in savings of over $100 million.

### 5.2. Enhancing Quality Control <a name="enhancing-quality-control"></a>

Quality control is a critical aspect of manufacturing that directly impacts costs. Autonomous LLM agents can significantly improve quality control processes by:

- **Real-time Defect Detection**: Integrating with computer vision systems to identify and classify defects in real-time, reducing waste and rework.

- **Root Cause Analysis**: Analyzing production data to identify the root causes of quality issues and recommending corrective actions.

- **Predictive Quality Management**: Leveraging historical data and machine learning models to predict potential quality issues before they occur.

Example: A semiconductor manufacturer deployed an autonomous LLM agent to enhance its quality control processes. The agent analyzed data from multiple sensors on the production line, historical quality records, and equipment maintenance logs. By identifying subtle patterns that led to defects, the agent helped reduce scrap rates by 30% and improved overall yield by 5%, resulting in annual savings of $50 million.

### 5.3. Streamlining Maintenance and Repairs <a name="streamlining-maintenance-and-repairs"></a>

Effective maintenance is crucial for minimizing downtime and extending the lifespan of expensive manufacturing equipment. Autonomous LLM agents can optimize maintenance processes by:

- **Predictive Maintenance**: Analyzing equipment sensor data, maintenance history, and production schedules to predict when maintenance is needed, reducing unplanned downtime.

- **Maintenance Scheduling Optimization**: Balancing maintenance needs with production schedules to minimize disruptions and maximize equipment availability.

- **Repair Knowledge Management**: Creating and maintaining a comprehensive knowledge base of repair procedures, making it easier for technicians to quickly address issues.

Example: A paper mill implemented an autonomous LLM agent to manage its maintenance operations. The agent analyzed vibration data from critical equipment, historical maintenance records, and production schedules. By implementing a predictive maintenance strategy, the mill reduced unplanned downtime by 40% and extended the lifespan of key equipment by 25%, resulting in annual savings of $15 million in maintenance costs and lost production time.

### 5.4. Improving Energy Efficiency <a name="improving-energy-efficiency"></a>

Energy consumption is a significant cost factor in manufacturing. Autonomous LLM agents can help reduce energy costs by:

- **Real-time Energy Monitoring**: Analyzing energy consumption data across the facility to identify inefficiencies and anomalies.

- **Process Optimization for Energy Efficiency**: Recommending changes to production processes to reduce energy consumption without impacting output.

- **Demand Response Management**: Integrating with smart grid systems to optimize energy usage based on variable electricity prices and demand.

Example: A large chemical manufacturing plant deployed an autonomous LLM agent to optimize its energy consumption. The agent analyzed data from thousands of sensors across the facility, weather forecasts, and electricity price fluctuations. By optimizing process parameters and scheduling energy-intensive operations during off-peak hours, the plant reduced its energy costs by 18%, saving $10 million annually.

## 6. Revenue Growth Opportunities <a name="revenue-growth-opportunities"></a>

While cost reduction is crucial, autonomous LLM agents also present significant opportunities for revenue growth in manufacturing enterprises. Let's explore how these advanced AI systems can drive top-line growth:

### 6.1. Product Innovation and Development <a name="product-innovation-and-development"></a>

Autonomous LLM agents can accelerate and enhance the product innovation process by:

- **Market Trend Analysis**: Analyzing vast amounts of market data, customer feedback, and industry reports to identify emerging trends and unmet needs.

- **Design Optimization**: Leveraging generative design techniques and historical performance data to suggest optimal product designs that balance functionality, manufacturability, and cost.

- **Rapid Prototyping Assistance**: Guiding engineers through the prototyping process, suggesting materials and manufacturing techniques based on design requirements and cost constraints.

Example: A consumer electronics manufacturer utilized an autonomous LLM agent to enhance its product development process. The agent analyzed social media trends, customer support tickets, and competitor product features to identify key areas for innovation. By suggesting novel features and optimizing designs for manufacturability, the company reduced time-to-market for new products by 30% and increased the success rate of new product launches by 25%, resulting in a 15% increase in annual revenue.

### 6.2. Personalized Customer Experiences <a name="personalized-customer-experiences"></a>

In the age of mass customization, providing personalized experiences can significantly boost customer satisfaction and revenue. Autonomous LLM agents can facilitate this by:

- **Customer Preference Analysis**: Analyzing historical purchase data, customer interactions, and market trends to predict individual customer preferences.

- **Dynamic Product Configuration**: Enabling real-time product customization based on customer inputs and preferences, while ensuring manufacturability.

- **Personalized Marketing and Sales Support**: Generating tailored marketing content and sales recommendations for each customer or market segment.

Example: A high-end furniture manufacturer implemented an autonomous LLM agent to power its online customization platform. The agent analyzed customer behavior, design trends, and production capabilities to offer personalized product recommendations and customization options. This led to a 40% increase in online sales and a 20% increase in average order value, driving significant revenue growth.

### 6.3. Market Analysis and Trend Prediction <a name="market-analysis-and-trend-prediction"></a>

Staying ahead of market trends is crucial for maintaining a competitive edge. Autonomous LLM agents can provide valuable insights by:

- **Competitive Intelligence**: Analyzing competitor activities, product launches, and market positioning to identify threats and opportunities.

- **Demand Forecasting**: Combining historical sales data, economic indicators, and market trends to predict future demand more accurately.

- **Emerging Market Identification**: Analyzing global economic data, demographic trends, and industry reports to identify promising new markets for expansion.

Example: A global automotive parts manufacturer employed an autonomous LLM agent to enhance its market intelligence capabilities. The agent analyzed data from industry reports, social media, patent filings, and economic indicators to predict the growth of electric vehicle adoption in different regions. This insight allowed the company to strategically invest in EV component manufacturing, resulting in a 30% year-over-year growth in this high-margin segment.

### 6.4. Optimizing Pricing Strategies <a name="optimizing-pricing-strategies"></a>

Pricing is a critical lever for revenue growth. Autonomous LLM agents can optimize pricing strategies by:

- **Dynamic Pricing Models**: Analyzing market conditions, competitor pricing, and demand fluctuations to suggest optimal pricing in real-time.

- **Value-based Pricing Analysis**: Assessing customer perceived value through sentiment analysis and willingness-to-pay studies to maximize revenue.

- **Bundle and Discount Optimization**: Recommending product bundles and discount structures that maximize overall revenue and profitability.

Example: An industrial equipment manufacturer implemented an autonomous LLM agent to optimize its pricing strategy. The agent analyzed historical sales data, competitor pricing, economic indicators, and customer sentiment to recommend dynamic pricing models for different product lines and markets. This resulted in a 10% increase in profit margins and a 7% boost in overall revenue within the first year of implementation.

213 |
+
## 7. Implementation Strategies <a name="implementation-strategies"></a>
|
214 |
+
|
215 |
+
Successfully implementing autonomous LLM agents in a manufacturing environment requires a strategic approach. Here are key steps and considerations for executives and CEOs:
|
216 |
+
|
217 |
+
1. **Start with a Clear Vision and Objectives**:
|
218 |
+
- Define specific goals for cost reduction and revenue growth.
|
219 |
+
- Identify key performance indicators (KPIs) to measure success.
|
220 |
+
|
221 |
+
2. **Conduct a Comprehensive Readiness Assessment**:
|
222 |
+
- Evaluate existing IT infrastructure and data management systems.
|
223 |
+
- Assess the quality and accessibility of historical data.
|
224 |
+
- Identify potential integration points with existing systems and processes.
|
225 |
+
|
226 |
+
3. **Build a Cross-functional Implementation Team**:
|
227 |
+
- Include representatives from IT, operations, engineering, and business strategy.
|
228 |
+
- Consider partnering with external AI and manufacturing technology experts.
|
229 |
+
|
230 |
+
4. **Develop a Phased Implementation Plan**:
|
231 |
+
- Start with pilot projects in specific areas (e.g., predictive maintenance or supply chain optimization).
|
232 |
+
- Scale successful pilots across the organization.
|
233 |
+
|
234 |
+
5. **Invest in Data Infrastructure and Quality**:
|
235 |
+
- Ensure robust data collection and storage systems are in place.
|
236 |
+
- Implement data cleaning and standardization processes.
|
237 |
+
|
238 |
+
|
239 |
+
|
240 |
+
6. **Choose the Right LLM and RAG Technologies**:
|
241 |
+
- Evaluate different LLM options based on performance, cost, and specific manufacturing requirements.
|
242 |
+
- Select RAG embedding databases that can efficiently handle the scale and complexity of manufacturing data.
|
243 |
+
|
244 |
+
7. **Develop a Robust Integration Strategy**:
|
245 |
+
- Plan for seamless integration with existing ERP, MES, and other critical systems.
|
246 |
+
- Ensure proper API development and management for connecting with external tools and databases.
|
247 |
+
|
248 |
+
8. **Prioritize Security and Compliance**:
|
249 |
+
- Implement strong data encryption and access control measures.
|
250 |
+
- Ensure compliance with industry regulations and data privacy laws.
|
251 |
+
|
252 |
+
9. **Invest in Change Management and Training**:
|
253 |
+
- Develop comprehensive training programs for employees at all levels.
|
254 |
+
- Communicate the benefits and address concerns about AI implementation.
|
255 |
+
|
256 |
+
10. **Establish Governance and Oversight**:
|
257 |
+
- Create a governance structure to oversee the use and development of AI systems.
|
258 |
+
- Implement ethical guidelines for AI decision-making.
|
259 |
+
|
260 |
+
11. **Plan for Continuous Improvement**:
|
261 |
+
- Set up feedback loops to continuously refine and improve the AI systems.
|
262 |
+
- Stay updated on advancements in LLM and RAG technologies.
|
263 |
+
|
264 |
+
Example: A leading automotive manufacturer implemented autonomous LLM agents across its global operations using a phased approach. They started with a pilot project in predictive maintenance at a single plant, which reduced downtime by 25%. Building on this success, they expanded to supply chain optimization and quality control. Within three years, the company had deployed AI agents across all major operations, resulting in a 12% reduction in overall production costs and a 9% increase in productivity.
|
265 |
+
|
266 |
+
## 8. Overcoming Challenges and Risks <a name="overcoming-challenges-and-risks"></a>
|
267 |
+
|
268 |
+
While the benefits of autonomous LLM agents in manufacturing are substantial, there are several challenges and risks that executives must address:
|
269 |
+
|
270 |
+
### Data Quality and Availability
|
271 |
+
|
272 |
+
**Challenge**: Manufacturing environments often have siloed, inconsistent, or incomplete data, which can hinder the effectiveness of AI systems.
|
273 |
+
|
274 |
+
**Solution**:
|
275 |
+
- Invest in data infrastructure and standardization across the organization.
|
276 |
+
- Implement data governance policies to ensure consistent data collection and management.
|
277 |
+
- Use data augmentation techniques to address gaps in historical data.
|
278 |
+
|
279 |
+
### Integration with Legacy Systems
|
280 |
+
|
281 |
+
**Challenge**: Many manufacturing facilities rely on legacy systems that may not easily integrate with modern AI technologies.
|
282 |
+
|
283 |
+
**Solution**:
|
284 |
+
- Develop custom APIs and middleware to facilitate communication between legacy systems and AI agents.
|
285 |
+
- Consider a gradual modernization strategy, replacing legacy systems over time.
|
286 |
+
- Use edge computing devices to bridge the gap between old equipment and new AI systems.
|
287 |
+
|
288 |
+
### Workforce Adaptation and Resistance
|
289 |
+
|
290 |
+
**Challenge**: Employees may resist AI implementation due to fear of job displacement or lack of understanding.
|
291 |
+
|
292 |
+
**Solution**:
|
293 |
+
- Emphasize that AI is a tool to augment human capabilities, not replace workers.
|
294 |
+
- Provide comprehensive training programs to upskill employees.
|
295 |
+
- Involve workers in the AI implementation process to gain buy-in and valuable insights.
|
296 |
+
|
297 |
+
### Ethical Considerations and Bias
|
298 |
+
|
299 |
+
**Challenge**: AI systems may inadvertently perpetuate biases present in historical data or decision-making processes.
|
300 |
+
|
301 |
+
**Solution**:
|
302 |
+
- Implement rigorous testing for bias in AI models and decisions.
|
303 |
+
- Establish an ethics committee to oversee AI implementations.
|
304 |
+
- Regularly audit AI systems for fairness and unintended consequences.
|
305 |
+
|
306 |
+
### Security and Intellectual Property Protection
|
307 |
+
|
308 |
+
**Challenge**: AI systems may be vulnerable to cyber attacks or could potentially expose sensitive manufacturing processes.
|
309 |
+
|
310 |
+
**Solution**:
|
311 |
+
- Implement robust cybersecurity measures, including encryption and access controls.
|
312 |
+
- Develop clear policies on data handling and AI model ownership.
|
313 |
+
- Regularly conduct security audits and penetration testing.
|
314 |
+
|
315 |
+
Example: A pharmaceutical manufacturer faced challenges integrating AI agents with its highly regulated production processes. They addressed this by creating a cross-functional team of IT specialists, process engineers, and compliance officers. This team developed a custom integration layer that allowed AI agents to interact with existing systems while maintaining regulatory compliance. They also implemented a rigorous change management process, which included extensive training and a phased rollout. As a result, they successfully deployed AI agents that optimized production scheduling and quality control, leading to a 15% increase in throughput and a 30% reduction in quality-related issues.
|
316 |
+
|
317 |
+
## 9. Case Studies <a name="case-studies"></a>
|
318 |
+
|
319 |
+
To illustrate the transformative potential of autonomous LLM agents in manufacturing, let's examine several real-world case studies:
|
320 |
+
|
321 |
+
### Case Study 1: Global Electronics Manufacturer
|
322 |
+
|
323 |
+
**Challenge**: A leading electronics manufacturer was struggling with supply chain disruptions and rising production costs.
|
324 |
+
|
325 |
+
**Solution**: They implemented an autonomous LLM agent integrated with their supply chain management system and production planning tools.
|
326 |
+
|
327 |
+
**Results**:
|
328 |
+
- 22% reduction in inventory carrying costs
|
329 |
+
- 18% improvement in on-time deliveries
|
330 |
+
- 15% decrease in production lead times
|
331 |
+
- $200 million annual cost savings
|
332 |
+
|
333 |
+
**Key Factors for Success**:
|
334 |
+
- Comprehensive integration with existing systems
|
335 |
+
- Real-time data processing capabilities
|
336 |
+
- Continuous learning and optimization algorithms
|
337 |
+
|
338 |
+
### Case Study 2: Automotive Parts Supplier
|
339 |
+
|
340 |
+
**Challenge**: An automotive parts supplier needed to improve quality control and reduce warranty claims.
|
341 |
+
|
342 |
+
**Solution**: They deployed an AI-powered quality control system using computer vision and an autonomous LLM agent for defect analysis and prediction.
|
343 |
+
|
344 |
+
**Results**:
|
345 |
+
- 40% reduction in defect rates
|
346 |
+
- 60% decrease in warranty claims
|
347 |
+
- 25% improvement in overall equipment effectiveness (OEE)
|
348 |
+
- $75 million annual savings in quality-related costs
|
349 |
+
|
350 |
+
**Key Factors for Success**:
|
351 |
+
- High-quality image data collection system
|
352 |
+
- Integration of domain expertise into the AI model
|
353 |
+
- Continuous feedback loop for model improvement
|
354 |
+
|
355 |
+
### Case Study 3: Food and Beverage Manufacturer
|
356 |
+
|
357 |
+
**Challenge**: A large food and beverage manufacturer wanted to optimize its energy consumption and reduce waste in its production processes.
|
358 |
+
|
359 |
+
**Solution**: They implemented an autonomous LLM agent that integrated with their energy management systems and production equipment.
|
360 |
+
|
361 |
+
**Results**:
|
362 |
+
- 20% reduction in energy consumption
|
363 |
+
- 30% decrease in production waste
|
364 |
+
- 12% increase in overall production efficiency
|
365 |
+
- $50 million annual cost savings
|
366 |
+
- Significant progress towards sustainability goals
|
367 |
+
|
368 |
+
**Key Factors for Success**:
|
369 |
+
- Comprehensive sensor network for real-time data collection
|
370 |
+
- Integration with smart grid systems for dynamic energy management
|
371 |
+
- Collaboration with process engineers to refine AI recommendations
|
372 |
+
|
373 |
+
### Case Study 4: Aerospace Component Manufacturer
|
374 |
+
|
375 |
+
**Challenge**: An aerospace component manufacturer needed to accelerate product development and improve first-time-right rates for new designs.
|
376 |
+
|
377 |
+
**Solution**: They implemented an autonomous LLM agent to assist in the design process, leveraging historical data, simulation results, and industry standards.
|
378 |
+
|
379 |
+
**Results**:
|
380 |
+
- 35% reduction in design cycle time
|
381 |
+
- 50% improvement in first-time-right rates for new designs
|
382 |
+
- 20% increase in successful patent applications
|
383 |
+
- $100 million increase in annual revenue from new products
|
384 |
+
|
385 |
+
**Key Factors for Success**:
|
386 |
+
- Integration of CAD systems with the AI agent
|
387 |
+
- Incorporation of aerospace industry standards and regulations into the AI knowledge base
|
388 |
+
- Collaborative approach between AI and human engineers
|
389 |
+
|
390 |
+
These case studies demonstrate the wide-ranging benefits of autonomous LLM agents across various manufacturing sectors. The key takeaway is that successful implementation requires a holistic approach, combining technology integration, process redesign, and a focus on continuous improvement.
|
391 |
+
|
392 |
+
## 10. Future Outlook <a name="future-outlook"></a>
|
393 |
+
|
394 |
+
As we look to the future of manufacturing, the role of autonomous LLM agents is set to become even more critical. Here are some key trends and developments that executives should keep on their radar:
|
395 |
+
|
396 |
+
### 1. Advanced Natural Language Interfaces
|
397 |
+
|
398 |
+
Future LLM agents will feature more sophisticated natural language interfaces, allowing workers at all levels to interact with complex manufacturing systems using conversational language. This will democratize access to AI capabilities and enhance overall operational efficiency.
|
399 |
+
|
400 |
+
### 2. Enhanced Multi-modal Learning
|
401 |
+
|
402 |
+
Next-generation agents will be able to process and analyze data from a wider range of sources, including text, images, video, and sensor data. This will enable more comprehensive insights and decision-making capabilities across the manufacturing ecosystem.
|
403 |
+
|
404 |
+
### 3. Collaborative AI Systems
|
405 |
+
|
406 |
+
We'll see the emergence of AI ecosystems where multiple specialized agents collaborate to solve complex manufacturing challenges. For example, a design optimization agent might work in tandem with a supply chain agent and a quality control agent to develop new products that are optimized for both performance and manufacturability.
|
407 |
+
|
408 |
+
### 4. Quantum-enhanced AI
|
409 |
+
|
410 |
+
As quantum computing becomes more accessible, it will significantly enhance the capabilities of LLM agents, particularly in complex optimization problems common in manufacturing. This could lead to breakthroughs in areas such as materials science and process optimization.
|
411 |
+
|
412 |
+
### 5. Augmented Reality Integration
|
413 |
+
|
414 |
+
LLM agents will increasingly be integrated with augmented reality (AR) systems, providing real-time guidance and information to workers on the factory floor. This could revolutionize training, maintenance, and quality control processes.
|
415 |
+
|
416 |
+
### 6. Autonomous Factories
|
417 |
+
|
418 |
+
The ultimate vision is the development of fully autonomous factories where LLM agents orchestrate entire production processes with minimal human intervention. While this is still on the horizon, progressive implementation of autonomous systems will steadily move the industry in this direction.
|
419 |
+
|
420 |
+
### 7. Ethical AI and Explainable Decision-Making
|
421 |
+
|
422 |
+
As AI systems become more prevalent in critical manufacturing decisions, there will be an increased focus on developing ethical AI frameworks and enhancing the explainability of AI decision-making processes. This will be crucial for maintaining trust and meeting regulatory requirements.
|
423 |
+
|
424 |
+
### 8. Circular Economy Optimization
|
425 |
+
|
426 |
+
Future LLM agents will play a key role in optimizing manufacturing processes for sustainability and circular economy principles. This will include enhancing recycling processes, optimizing resource use, and designing products for easy disassembly and reuse.
|
427 |
+
|
428 |
+
To stay ahead in this rapidly evolving landscape, manufacturing executives should:
|
429 |
+
|
430 |
+
1. **Foster a Culture of Innovation**: Encourage experimentation with new AI technologies and applications.
|
431 |
+
|
432 |
+
2. **Invest in Continuous Learning**: Ensure your workforce is constantly upskilling to work effectively with advanced AI systems.
|
433 |
+
|
434 |
+
3. **Collaborate with AI Research Institutions**: Partner with universities and research labs to stay at the forefront of AI advancements in manufacturing.
|
435 |
+
|
436 |
+
4. **Participate in Industry Consortiums**: Join manufacturing technology consortiums to share knowledge and shape industry standards for AI adoption.
|
437 |
+
|
438 |
+
5. **Develop Flexible and Scalable AI Infrastructure**: Build systems that can easily incorporate new AI capabilities as they emerge.
|
439 |
+
|
440 |
+
6. **Monitor Regulatory Developments**: Stay informed about evolving regulations related to AI in manufacturing to ensure compliance and competitive advantage.
|
441 |
+
|
442 |
+
By embracing these future trends and preparing their organizations accordingly, manufacturing executives can position their companies to thrive in the AI-driven future of industry.
|
443 |
+
|
444 |
+
## 11. Conclusion <a name="conclusion"></a>
|
445 |
+
|
446 |
+
The integration of autonomous LLM agents with RAG embedding databases, function calling, and external tools represents a paradigm shift in manufacturing. This technology has the potential to dramatically reduce costs, drive revenue growth, and revolutionize how manufacturing enterprises operate.
|
447 |
+
|
448 |
+
Key takeaways for executives and CEOs:
|
449 |
+
|
450 |
+
1. **Transformative Potential**: Autonomous LLM agents can impact every aspect of manufacturing, from supply chain optimization to product innovation.
|
451 |
+
|
452 |
+
2. **Data-Driven Decision Making**: These AI systems enable more informed, real-time decision-making based on comprehensive data analysis.
|
453 |
+
|
454 |
+
3. **Competitive Advantage**: Early adopters of this technology are likely to gain significant competitive advantages in terms of efficiency, quality, and innovation.
|
455 |
+
|
456 |
+
4. **Holistic Implementation**: Success requires a strategic approach that addresses technology, processes, and people.
|
457 |
+
|
458 |
+
5. **Continuous Evolution**: The field of AI in manufacturing is rapidly advancing, necessitating ongoing investment and adaptation.
|
459 |
+
|
460 |
+
6. **Ethical Considerations**: As AI becomes more prevalent, addressing ethical concerns and maintaining transparency will be crucial.
|
461 |
+
|
462 |
+
7. **Future Readiness**: Preparing for future developments, such as quantum-enhanced AI and autonomous factories, will be key to long-term success.
|
463 |
+
|
464 |
+
The journey to implement autonomous LLM agents in manufacturing is complex but potentially transformative. It requires vision, commitment, and a willingness to reimagine traditional manufacturing processes. However, the potential rewards – in terms of cost savings, revenue growth, and competitive advantage – are substantial.
|
465 |
+
|
466 |
+
As a manufacturing executive or CEO, your role is to lead this transformation, fostering a culture of innovation and continuous improvement. By embracing the power of autonomous LLM agents, you can position your organization at the forefront of the next industrial revolution, driving sustainable growth and success in an increasingly competitive global marketplace.
|
467 |
+
|
468 |
+
The future of manufacturing is intelligent, autonomous, and data-driven. The time to act is now. Embrace the potential of autonomous LLM agents and lead your organization into a new era of manufacturing excellence.
docs/applications/business-analyst-agent.md
ADDED
@@ -0,0 +1,976 @@
## Building Analyst Agents with Swarms to write Business Reports

> Jupyter Notebook accompanying this post is accessible at: [Business Analyst Agent Notebook](https://github.com/kyegomez/swarms/blob/master/examples/demos/business_analysis_swarm/business-analyst-agent.ipynb)

Solving a business problem often involves preparing a Business Case Report. This report comprehensively analyzes the problem, evaluates potential solutions, and provides evidence-based recommendations and an implementation plan to effectively address the issue and drive business value. While preparing one requires an experienced business analyst, the workflow can be augmented using AI agents. Two candidates stick out as areas to work on:

- Developing an outline to solve the problem
- Doing background research and gathering data

In this post, we will explore how Swarms agents can be used to tackle a business problem by outlining the solution, conducting background research, and generating a preliminary report.

Before we proceed, this blog uses 3 API tools. Please obtain the following keys and store them in a `.env` file in the same folder as this file.

- **[OpenAI API](https://openai.com/blog/openai-api)** as `OPENAI_API_KEY`
- **[TavilyAI API](https://app.tavily.com/home)** as `TAVILY_API_KEY`
- **[KayAI API](https://www.kay.ai/)** as `KAY_API_KEY`

```python
import dotenv
dotenv.load_dotenv()  # Load environment variables from .env file
```

### Developing an Outline to solve the problem

Assume the business problem is: **How do we improve Nike's revenue in Q3 2024?** We first create a planning agent to break down the problem into dependent sub-problems.

#### Step 1. Defining the Data Model and Tool Schema

Using Pydantic, we define a structure to help the agent generate sub-problems.

- **QueryType:** Questions are either standalone or involve a combination of multiple others
- **Query:** Defines the structure of a question.
- **QueryPlan:** Allows generation of a dependency graph of sub-questions

```python
import enum
from typing import List
from pydantic import Field, BaseModel

class QueryType(str, enum.Enum):
    """Enumeration representing the types of queries that can be asked to a question answer system."""

    SINGLE_QUESTION = "SINGLE"
    MERGE_MULTIPLE_RESPONSES = "MERGE_MULTIPLE_RESPONSES"

class Query(BaseModel):
    """Class representing a single question in a query plan."""

    id: int = Field(..., description="Unique id of the query")
    question: str = Field(
        ...,
        description="Question asked using a question answering system",
    )
    dependencies: List[int] = Field(
        default_factory=list,
        description="List of sub questions that need to be answered before asking this question",
    )
    node_type: QueryType = Field(
        default=QueryType.SINGLE_QUESTION,
        description="Type of question, either a single question or a multi-question merge",
    )

class QueryPlan(BaseModel):
    """Container class representing a tree of questions to ask a question answering system."""

    query_graph: List[Query] = Field(
        ..., description="The query graph representing the plan"
    )

    def _dependencies(self, ids: List[int]) -> List[Query]:
        """Returns the dependencies of a query given their ids."""

        return [q for q in self.query_graph if q.id in ids]
```
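
As a quick illustration of how a plan resolves prerequisites, below is a minimal, hypothetical two-query graph (the ids and questions are invented for the example) and a call to `_dependencies`:

```python
# Hypothetical example: query 2 depends on query 1
plan = QueryPlan(
    query_graph=[
        Query(id=1, question="What is Nike's current revenue trend?"),
        Query(
            id=2,
            question="How do we improve Nike's revenue in Q3 2024?",
            dependencies=[1],
            node_type=QueryType.MERGE_MULTIPLE_RESPONSES,
        ),
    ]
)

# Resolve the prerequisites of query 2 before asking it
print([q.question for q in plan._dependencies([1])])
# -> ["What is Nike's current revenue trend?"]
```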

Also, a `tool_schema` needs to be defined. It is an instance of `QueryPlan` and is used to initialize the agent.

```python
tool_schema = QueryPlan(
    query_graph = [query.dict() for query in [
        Query(
            id=1,
            question="How do we improve Nike's revenue in Q3 2024?",
            dependencies=[2],
            node_type=QueryType('SINGLE')
        ),
        # ... other queries ...
    ]]
)
```

#### Step 2. Defining the Planning Agent

We specify the query, the task specification, and an appropriate system prompt.

```python
from swarm_models import OpenAIChat
from swarms import Agent

query = "How do we improve Nike's revenue in Q3 2024?"
task = f"Consider: {query}. Generate just the correct query plan in JSON format."
system_prompt = (
    "You are a world class query planning algorithm "
    "capable of breaking apart questions into its "
    "dependency queries such that the answers can be "
    "used to inform the parent question. Do not answer "
    "the questions, simply provide a correct compute "
    "graph with good specific questions to ask and relevant "
    "dependencies. Before you call the function, think "
    "step-by-step to get a better understanding of the problem."
)
llm = OpenAIChat(
    temperature=0.0, model_name="gpt-4", max_tokens=4000
)
```

Then, we proceed with the agent definition.

```python
# Initialize the agent
agent = Agent(
    agent_name="Query Planner",
    system_prompt=system_prompt,
    # Set the tool schema to the JSON string -- this is the key difference
    tool_schema=tool_schema,
    llm=llm,
    max_loops=1,
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    interactive=False,
    # Set the output type to the tool schema which is a BaseModel
    output_type=tool_schema,  # or dict, or str
    metadata_output_type="json",
    # List of schemas that the agent can handle
    list_base_models=[tool_schema],
    function_calling_format_type="OpenAI",
    function_calling_type="json",  # or soon yaml
)
```

#### Step 3. Obtaining Outline from Planning Agent

We now run the agent, and since its output is in JSON format, we can load it as a dictionary.

```python
generated_data = agent.run(task)
```

At times, the agent may return extra content besides the JSON. The function below filters it out.

```python
def process_json_output(content):
    # Find the index of the first occurrence of '```json\n'
    start_index = content.find('```json\n')
    if start_index == -1:
        # If '```json\n' is not found, return the original content
        return content
    # Return the part of the content after '```json\n' and remove the '```' at the end
    return content[start_index + len('```json\n'):].rstrip('`')

# Use the function to clean up the output
json_content = process_json_output(generated_data.content)

import json

# Load the JSON string into a Python object
json_object = json.loads(json_content)

# Convert the Python object back to a JSON string
json_content = json.dumps(json_object, indent=2)

# Print the JSON string
print(json_content)
```
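
To see what the helper does in isolation, here is a quick check on a made-up wrapped output (the sample string is invented for illustration):

```python
# Hypothetical agent output that wraps the JSON in a fenced block
sample = "Here is the plan:\n```json\n{\"a\": 1}\n```"
print(process_json_output(sample))  # -> {"a": 1}
```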

Below is the output this produces:

```json
{
  "main_query": "How do we improve Nike's revenue in Q3 2024?",
  "sub_queries": [
    {
      "id": "1",
      "query": "What is Nike's current revenue trend?"
    },
    {
      "id": "2",
      "query": "What are the projected market trends for the sports apparel industry in 2024?"
    },
    {
      "id": "3",
      "query": "What are the current successful strategies being used by Nike's competitors?",
      "dependencies": [
        "2"
      ]
    },
    {
      "id": "4",
      "query": "What are the current and projected economic conditions in Nike's major markets?",
      "dependencies": [
        "2"
      ]
    },
    {
      "id": "5",
      "query": "What are the current consumer preferences in the sports apparel industry?",
      "dependencies": [
        "2"
      ]
    },
    {
      "id": "6",
      "query": "What are the potential areas of improvement in Nike's current business model?",
      "dependencies": [
        "1"
      ]
    },
    {
      "id": "7",
      "query": "What are the potential new markets for Nike to explore in 2024?",
      "dependencies": [
        "2",
        "4"
      ]
    },
    {
      "id": "8",
      "query": "What are the potential new products or services Nike could introduce in 2024?",
      "dependencies": [
        "5"
      ]
    },
    {
      "id": "9",
      "query": "What are the potential marketing strategies Nike could use to increase its revenue in Q3 2024?",
      "dependencies": [
        "3",
        "5",
        "7",
        "8"
      ]
    },
    {
      "id": "10",
      "query": "What are the potential cost-saving strategies Nike could implement to increase its net revenue in Q3 2024?",
      "dependencies": [
        "6"
      ]
    }
  ]
}
```

The JSON dictionary is not convenient for humans to process. We make a directed graph out of it.

```python
import networkx as nx
import matplotlib.pyplot as plt
import textwrap
import random

# Create a directed graph
G = nx.DiGraph()

# Define a color map
color_map = {}

# Add nodes and edges to the graph
for sub_query in json_object['sub_queries']:
    # Check if 'dependencies' key exists in sub_query; if not, initialize it as an empty list
    if 'dependencies' not in sub_query:
        sub_query['dependencies'] = []
    # Assign a random color for each node
    color_map[sub_query['id']] = "#{:06x}".format(random.randint(0, 0xFFFFFF))
    G.add_node(sub_query['id'], label=textwrap.fill(sub_query['query'], width=20))
    for dependency in sub_query['dependencies']:
        G.add_edge(dependency, sub_query['id'])

# Draw the graph
pos = nx.spring_layout(G)
nx.draw(G, pos, with_labels=True, node_size=800, node_color=[color_map[node] for node in G.nodes()], node_shape="o", alpha=0.5, linewidths=40)

# Prepare labels for legend
labels = nx.get_node_attributes(G, 'label')
handles = [plt.Line2D([0], [0], marker='o', color=color_map[node], label=f"{node}: {label}", markersize=10, linestyle='None') for node, label in labels.items()]

# Create a legend
plt.legend(handles=handles, title="Queries", bbox_to_anchor=(1.05, 1), loc='upper left')

plt.show()
```

This produces the diagram below, which makes the plan much more convenient to understand.

![Query Plan Diagram](../assets/img/docs/query-plan.png)

### Doing Background Research and Gathering Data

At this point, we have solved the first half of the problem. We have an outline consisting of sub-problems to be tackled to solve our business problem. This will form the overall structure of our report. We now need to research information for each sub-problem in order to write an informed report. This is mechanically intensive and is the aspect that will most benefit from agentic intervention.

Essentially, we can spawn parallel agents to gather the data. Each agent will have 2 tools:

- Internet access
- Financial data retrieval

As they run in parallel, they will add their knowledge into a common long-term memory. We will then spawn a separate report-writing agent with access to this memory to generate our business case report.

#### Step 4. Defining Tools for Worker Agents

Let us first define the 2 tools.

```python
import os
from typing import List, Dict

from swarms import tool

os.environ['TAVILY_API_KEY'] = os.getenv('TAVILY_API_KEY')
os.environ["KAY_API_KEY"] = os.getenv('KAY_API_KEY')

from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_core.pydantic_v1 import BaseModel, Field

from kay.rag.retrievers import KayRetriever

def browser(query: str) -> str:
    """
    Search the query in the browser with the Tavily API tool.
    Args:
        query (str): The query to search in the browser.
    Returns:
        str: The search results
    """
    internet_search = TavilySearchResults()
    results = internet_search.invoke({"query": query})
    response = ''
    for result in results:
        response += (result['content'] + '\n')
    return response

def kay_retriever(query: str) -> str:
    """
    Search the financial data query with the KayAI API tool.
    Args:
        query (str): The query to search in the KayRetriever.
    Returns:
        str: The first context retrieved as a string.
    """
    # Initialize the retriever
    retriever = KayRetriever(dataset_id = "company", data_types=["10-K", "10-Q", "8-K", "PressRelease"])
    # Query the retriever
    context = retriever.query(query=query, num_context=1)
    return context[0]['chunk_embed_text']
```

#### Step 5. Defining Long-Term Memory

As mentioned previously, the worker agents running in parallel will pool their knowledge into a common memory. Let us define that.

```python
import logging
import os
import uuid
from typing import Callable, List, Optional

import chromadb
import numpy as np
from dotenv import load_dotenv

from swarms.utils.data_to_text import data_to_text
from swarms.utils.markdown_message import display_markdown_message
from swarms_memory import AbstractVectorDatabase


# Results storage using local ChromaDB
class ChromaDB(AbstractVectorDatabase):
    """

    ChromaDB database

    Args:
        metric (str): The similarity metric to use.
        output_dir (str): The name of the collection to store the results in.
        limit_tokens (int, optional): The maximum number of tokens to use for the query. Defaults to 1000.
        n_results (int, optional): The number of results to retrieve. Defaults to 3.

    Methods:
        add: add a document to the collection.
        query: retrieve the closest documents for a query string.

    Examples:
        >>> chromadb = ChromaDB(
        >>>     metric="cosine",
        >>>     output_dir="results",
        >>> )
        >>> chromadb.add(document)
    """

    def __init__(
        self,
        metric: str = "cosine",
        output_dir: str = "swarms",
        limit_tokens: Optional[int] = 1000,
        n_results: int = 3,
        embedding_function: Callable = None,
        docs_folder: str = None,
        verbose: bool = False,
        *args,
        **kwargs,
    ):
        self.metric = metric
        self.output_dir = output_dir
        self.limit_tokens = limit_tokens
        self.n_results = n_results
        self.docs_folder = docs_folder
        self.verbose = verbose

        # Enable ChromaDB logging only in verbose mode
        if verbose:
            logging.getLogger("chromadb").setLevel(logging.INFO)

        # Create a persistent Chroma client
        chroma_persist_dir = "chroma"
        chroma_client = chromadb.PersistentClient(
            settings=chromadb.config.Settings(
                persist_directory=chroma_persist_dir,
            ),
            *args,
            **kwargs,
        )

        # Embedding model
        if embedding_function:
            self.embedding_function = embedding_function
        else:
            self.embedding_function = None

        # Create ChromaDB client
        self.client = chromadb.Client()

        # Create Chroma collection
        self.collection = chroma_client.get_or_create_collection(
            name=output_dir,
            metadata={"hnsw:space": metric},
            embedding_function=self.embedding_function,
            # data_loader=self.data_loader,
            *args,
            **kwargs,
        )
        display_markdown_message(
            "ChromaDB collection created:"
            f" {self.collection.name} with metric: {self.metric} and"
            f" output directory: {self.output_dir}"
        )

        # If a docs folder is given, ingest its files
        if docs_folder:
            display_markdown_message(
                f"Traversing directory: {docs_folder}"
            )
            self.traverse_directory()

    def add(
        self,
        document: str,
        *args,
        **kwargs,
    ):
        """
        Add a document to the ChromaDB collection.

        Args:
            document (str): The document to be added.

        Returns:
            str: The ID of the added document.
        """
        try:
            doc_id = str(uuid.uuid4())
            self.collection.add(
                ids=[doc_id],
                documents=[document],
                *args,
                **kwargs,
            )
            print('-----------------')
            print("Document added successfully")
            print('-----------------')
            return doc_id
        except Exception as e:
            raise Exception(f"Failed to add document: {str(e)}")

    def query(
        self,
        query_text: str,
        *args,
        **kwargs,
    ):
        """
        Query documents from the ChromaDB collection.

        Args:
            query_text (str): The query string.

        Returns:
            list: The retrieved documents.
        """
        try:
            docs = self.collection.query(
                query_texts=[query_text],
                n_results=self.n_results,
                *args,
                **kwargs,
            )["documents"]
            return docs[0]
        except Exception as e:
            raise Exception(f"Failed to query documents: {str(e)}")

    def traverse_directory(self):
        """
        Traverse through every file in the docs folder and its subdirectories,
        and add the text of each file to the collection.

        Returns:
            - bool: The result of the last add operation.
        """
        added_to_db = False

        for root, dirs, files in os.walk(self.docs_folder):
            for file in files:
                # Join against the directory being walked so subdirectories work
                file = os.path.join(root, file)
                _, ext = os.path.splitext(file)
                data = data_to_text(file)
                added_to_db = self.add(data)
                print(f"{file} added to Database")

        return added_to_db
```

We can now proceed to initialize the memory.

```python
from chromadb.utils import embedding_functions
default_ef = embedding_functions.DefaultEmbeddingFunction()

memory = ChromaDB(
    metric="cosine",
    n_results=3,
    output_dir="results",
    embedding_function=default_ef
)
```
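
Before wiring this memory into the agents, a quick smoke test helps confirm that storage and retrieval round-trip (the document text here is invented for the example):

```python
# Hypothetical smoke test for the shared memory
doc_id = memory.add("Nike reported revenue of $37.4 billion in its most recent fiscal year.")
print(memory.query("Nike revenue"))  # returns up to n_results matching documents
```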

#### Step 6. Defining Worker Agents

The Worker Agent subclasses the `Agent` class. The only difference between the two is in how the `run()` method works. In the `Agent` class, `run()` simply returns the set of tool commands to run, but does not execute them. We, however, want them executed. In addition, after we run our tools, we get the relevant information as output, which we want to add to our memory. Hence, to incorporate these 2 changes, we define `WorkerAgent` as follows.

```python
class WorkerAgent(Agent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def run(self, task, *args, **kwargs):
        response = super().run(task, *args, **kwargs)
        print(response.content)

        json_dict = json.loads(process_json_output(response.content))

        # print(json.dumps(json_dict, indent=2))

        if response is not None:
            try:
                commands = json_dict["commands"]
            except KeyError:
                commands = [json_dict['command']]

            for command in commands:
                tool_name = command["name"]

                if tool_name not in ['browser', 'kay_retriever']:
                    continue

                query = command["args"]["query"]

                # Get the tool by its name
                tool = globals()[tool_name]
                tool_response = tool(query)

                # Add tool's output to long term memory
                self.long_term_memory.add(tool_response)
```

We can then instantiate an object of the `WorkerAgent` class.

```python
worker_agent = WorkerAgent(
    agent_name="Worker Agent",
    system_prompt=(
        "Autonomous agent that can interact with browser, "
        "financial data retriever and other agents. Be Helpful "
        "and Kind. Use the tools provided to assist the user. "
        "Generate the plan with list of commands in JSON format."
    ),
    llm=OpenAIChat(
        temperature=0.0, model_name="gpt-4", max_tokens=4000
    ),
    max_loops="auto",
    autosave=True,
    dashboard=False,
    streaming_on=True,
    verbose=True,
    stopping_token="<DONE>",
    interactive=True,
    tools=[browser, kay_retriever],
    long_term_memory=memory,
    code_interpreter=True,
)
```

#### Step 7. Running the Worker Agents

At this point, we need to set up a concurrent workflow. While the order of adding tasks to the workflow doesn't matter (since they will all run concurrently when executed), we can take some time to define an order for these tasks. This order will come in handy later when writing the report using our Writer Agent.

The order we will follow is a Breadth First Traversal (BFT) of the sub-queries in the graph we made earlier (shown below again for reference). BFT makes sense here because we want all the dependent parent questions to be answered before answering a child question. Also, since we could have independent subgraphs, we will perform BFT separately on each subgraph.

![Query Plan Mini](../assets/img/docs/query-plan-mini.png)

Below is the code that produces the order of processing sub-queries.

```python
from collections import deque, defaultdict

# Define the graph nodes
nodes = json_object['sub_queries']

# Create a graph from the nodes
graph = defaultdict(list)
for node in nodes:
    for dependency in node['dependencies']:
        graph[dependency].append(node['id'])

# Find all nodes with no dependencies (potential starting points)
start_nodes = [node['id'] for node in nodes if not node['dependencies']]

# Adjust the BFT function to handle dependencies correctly
def bft_corrected(start, graph, nodes_info):
    visited = set()
    queue = deque([start])
    order = []

    while queue:
        node = queue.popleft()
        if node not in visited:
            # Check if all dependencies of the current node are visited
            node_dependencies = [n['id'] for n in nodes if n['id'] == node][0]
            dependencies_met = all(dep in visited for dep in nodes_info[node_dependencies]['dependencies'])

            if dependencies_met:
                visited.add(node)
                order.append(node)
                # Add only nodes to the queue whose dependencies are fully met
                for next_node in graph[node]:
                    if all(dep in visited for dep in nodes_info[next_node]['dependencies']):
                        queue.append(next_node)
            else:
                # Requeue the node to check dependencies later
                queue.append(node)

    return order

# Dictionary to access node information quickly
nodes_info = {node['id']: node for node in nodes}

# Perform BFT for each unvisited start node using the corrected BFS function
visited_global = set()
bfs_order = []

for start in start_nodes:
    if start not in visited_global:
        order = bft_corrected(start, graph, nodes_info)
        bfs_order.extend(order)
        visited_global.update(order)

print("BFT Order:", bfs_order)
```

This produces the following output.

```python
BFT Order: ['1', '6', '10', '2', '3', '4', '5', '7', '8', '9']
```

Now, let's define our `ConcurrentWorkflow` and run it.

```python
import os
from dotenv import load_dotenv
from swarms import Agent, ConcurrentWorkflow, OpenAIChat, Task

# Create a workflow
workflow = ConcurrentWorkflow(max_workers=5)
task_list = []

for node in bfs_order:
    sub_query = nodes_info[node]['query']
    task = Task(worker_agent, sub_query)
    print('-----------------')
    print("Added task: ", sub_query)
    print('-----------------')
    task_list.append(task)

workflow.add(tasks=task_list)

# Run the workflow
workflow.run()
```

Below is part of the output this workflow produces. We clearly see the thought process of the agent and the plan it came up with to solve a particular sub-query. In addition, we see the tool-calling schema it produces in `"command"`.

```python
...
...
content='\n{\n "thoughts": {\n "text": "To find out Nike\'s current revenue trend, I will use the financial data retriever tool to search for \'Nike revenue trend\'.",\n "reasoning": "The financial data retriever tool allows me to search for specific financial data, so I can look up the current revenue trend of Nike.", \n "plan": "Use the financial data retriever tool to search for \'Nike revenue trend\'. Parse the result to get the current revenue trend and format that into a readable report."\n },\n "command": {\n "name": "kay_retriever", \n "args": {\n "query": "Nike revenue trend"\n }\n }\n}\n```' response_metadata={'token_usage': {'completion_tokens': 152, 'prompt_tokens': 1527, 'total_tokens': 1679}, 'model_name': 'gpt-4', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}
Saved agent state to: Worker Agent_state.json

{
    "thoughts": {
        "text": "To find out Nike's current revenue trend, I will use the financial data retriever tool to search for 'Nike revenue trend'.",
        "reasoning": "The financial data retriever tool allows me to search for specific financial data, so I can look up the current revenue trend of Nike.",
        "plan": "Use the financial data retriever tool to search for 'Nike revenue trend'. Parse the result to get the current revenue trend and format that into a readable report."
    },
    "command": {
        "name": "kay_retriever",
        "args": {
            "query": "Nike revenue trend"
        }
    }
}

-----------------
Document added successfully
-----------------
...
...
```

Here, `"name"` pertains to the name of the tool to be called and `"args"` contains the arguments to be passed to the tool call. As mentioned before, we modify `Agent`'s default behaviour in `WorkerAgent`. Hence, the tool call is executed here and its results (information from web pages and the Kay Retriever API) are added to long-term memory. We get confirmation of this from the message `Document added successfully`.


#### Step 8. Generating the report using Writer Agent

At this point, our Worker Agents have gathered all the background information required to generate the report. We have also defined a coherent structure for the report, namely the BFT order for answering the sub-queries. Now it's time to define a Writer Agent and call it sequentially in the order of sub-queries.

```python
from swarms import Agent, OpenAIChat, tool

agent = Agent(
    agent_name="Writer Agent",
    agent_description=(
        "This agent writes reports based on information in long-term memory"
    ),
    system_prompt=(
        "You are a world-class financial report writer. "
        "Write analytical and accurate responses using memory to answer the query. "
        "Do not mention use of long-term memory in the report. "
        "Do not mention Writer Agent in response. "
        "Return only response content in strict markdown format."
    ),
    llm=OpenAIChat(temperature=0.2, model='gpt-3.5-turbo'),
    max_loops=1,
    autosave=True,
    verbose=True,
    long_term_memory=memory,
)
```

The individual sections of the report will be collected in a list.

```python
report = []
```

Let us now run the writer agent.

```python
for node in bfs_order:
    sub_query = nodes_info[node]['query']
    print("Running task: ", sub_query)
    out = agent.run(f"Consider: {sub_query}. Write response in strict markdown format using long-term memory. Do not mention Writer Agent in response.")
    print(out)
    try:
        report.append(out.content)
    except AttributeError:
        pass
```

Now, we need to clean up the report a bit to make it render professionally.

```python
# Remove any content before the first "#" as that signals start of heading
# Anything before this usually contains filler content
stripped_report = [entry[entry.find('#'):] if '#' in entry else entry for entry in report]
report = stripped_report

# At times the LLM outputs \\n instead of \n
cleaned_report = [entry.replace("\\n", "\n") for entry in report]
import re

# Function to clean up unnecessary metadata from the report entries
def clean_report(report):
    cleaned_report = []
    for entry in report:
        # This pattern matches 'response_metadata={' followed by any characters that are not '}' (non-greedy),
        # possibly nested inside other braces, until the closing '}'.
        cleaned_entry = re.sub(r"response_metadata=\{[^{}]*(?:\{[^{}]*\}[^{}]*)*\}", "", entry, flags=re.DOTALL)
        cleaned_report.append(cleaned_entry)
    return cleaned_report

# Apply the cleaning function to the markdown report
cleaned_report = clean_report(cleaned_report)
```
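
To see the effect of `clean_report` in isolation, here is a quick check on a made-up entry (the sample string is invented for illustration):

```python
# Hypothetical entry containing leftover metadata
sample = "## Heading\nBody text. response_metadata={'token_usage': {'total_tokens': 42}} More text."
print(clean_report([sample])[0])
# -> "## Heading\nBody text.  More text."
```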

After cleaning, we join the parts of the report together to get our final report.

```python
final_report = ' \n '.join(cleaned_report)
```

In a Jupyter Notebook, we can use the below code to render it in Markdown.

```python
from IPython.display import display, Markdown

display(Markdown(final_report))
```

## Final Generated Report

### Nike's Current Revenue Trend

Nike's current revenue trend has been steadily increasing over the past few years. In the most recent fiscal year, Nike reported a revenue of $37.4 billion, which was a 7% increase from the previous year. This growth can be attributed to strong sales in key markets, successful marketing campaigns, and a focus on innovation in product development. Overall, Nike continues to demonstrate strong financial performance and is well-positioned for future growth.

### Potential Areas of Improvement in Nike's Business Model

1. **Sustainability Practices**: Nike could further enhance its sustainability efforts by reducing its carbon footprint, using more eco-friendly materials, and ensuring ethical labor practices throughout its supply chain.

2. **Diversification of Product Portfolio**: While Nike is known for its athletic footwear and apparel, diversifying into new product categories or expanding into untapped markets could help drive growth and mitigate risks associated with a single product line.

3. **E-commerce Strategy**: Improving the online shopping experience, investing in digital marketing, and leveraging data analytics to personalize customer interactions could boost online sales and customer loyalty.

4. **Innovation and R&D**: Continuously investing in research and development to stay ahead of competitors, introduce new technologies, and enhance product performance could help maintain Nike's competitive edge in the market.

5. **Brand Image and Reputation**: Strengthening brand image through effective marketing campaigns, community engagement, and transparent communication with stakeholders can help build trust and loyalty among consumers.

### Potential Cost-Saving Strategies for Nike to Increase Net Revenue in Q3 2024

1. **Supply Chain Optimization**: Streamlining the supply chain, reducing transportation costs, and improving inventory management can lead to significant cost savings for Nike.

2. **Operational Efficiency**: Implementing lean manufacturing practices, reducing waste, and optimizing production processes can help lower production costs and improve overall efficiency.

3. **Outsourcing Non-Core Functions**: Outsourcing non-core functions such as IT services, customer support, or logistics can help reduce overhead costs and focus resources on core business activities.

4. **Energy Efficiency**: Investing in energy-efficient technologies, renewable energy sources, and sustainable practices can lower utility costs and demonstrate a commitment to environmental responsibility.

5. **Negotiating Supplier Contracts**: Negotiating better terms with suppliers, leveraging economies of scale, and exploring alternative sourcing options can help lower procurement costs and improve margins.

By implementing these cost-saving strategies, Nike can improve its bottom line and increase net revenue in Q3 2024.

### Projected Market Trends for the Sports Apparel Industry in 2024

1. **Sustainable Fashion**: Consumers are increasingly demanding eco-friendly and sustainable products, leading to a rise in sustainable sportswear options in the market.

2. **Digital Transformation**: The sports apparel industry is expected to continue its shift towards digital platforms, with a focus on e-commerce, personalized shopping experiences, and digital marketing strategies.

3. **Athleisure Wear**: The trend of athleisure wear, which combines athletic and leisure clothing, is projected to remain popular in 2024 as consumers seek comfort and versatility in their apparel choices.

4. **Innovative Materials**: Advances in technology and material science are likely to drive the development of innovative fabrics and performance-enhancing materials in sports apparel, catering to the demand for high-quality and functional products.

5. **Health and Wellness Focus**: With a growing emphasis on health and wellness, sports apparel brands are expected to incorporate features that promote comfort, performance, and overall well-being in their products.

Overall, the sports apparel industry in 2024 is anticipated to be characterized by sustainability, digitalization, innovation, and a focus on consumer health and lifestyle trends.

### Current Successful Strategies Used by Nike's Competitors

1. **Adidas**: Adidas has been successful in leveraging collaborations with celebrities and designers to create limited-edition collections that generate hype and drive sales. They have also focused on sustainability initiatives, such as using recycled materials in their products, to appeal to environmentally conscious consumers.

2. **Under Armour**: Under Armour has differentiated itself by targeting performance-driven athletes and emphasizing technological innovation in their products. They have also invested heavily in digital marketing and e-commerce to reach a wider audience and enhance the customer shopping experience.

3. **Puma**: Puma has successfully capitalized on the athleisure trend by offering stylish and versatile sportswear that can be worn both in and out of the gym. They have also focused on building partnerships with influencers and sponsoring high-profile athletes to increase brand visibility and credibility.

4. **Lululemon**: Lululemon has excelled in creating a strong community around its brand, hosting events, classes, and collaborations to engage with customers beyond just selling products. They have also prioritized customer experience by offering personalized services and creating a seamless omnichannel shopping experience.

5. **New Balance**: New Balance has carved out a niche in the market by emphasizing quality craftsmanship, heritage, and authenticity in their products. They have also focused on customization and personalization options for customers, allowing them to create unique and tailored footwear and apparel.

Overall, Nike's competitors have found success through a combination of innovative product offerings, strategic marketing initiatives, and a focus on customer engagement and experience.

### Current and Projected Economic Conditions in Nike's Major Markets

1. **United States**: The United States, being one of Nike's largest markets, is currently experiencing moderate economic growth driven by consumer spending, low unemployment rates, and a rebound in manufacturing. However, uncertainties surrounding trade policies, inflation, and interest rates could impact consumer confidence and spending in the near future.

2. **China**: China remains a key market for Nike, with a growing middle class and increasing demand for sportswear and athletic footwear. Despite recent trade tensions with the U.S., China's economy is projected to continue expanding, driven by domestic consumption, infrastructure investments, and technological advancements.

3. **Europe**: Economic conditions in Europe vary across countries, with some experiencing sluggish growth due to Brexit uncertainties, political instability, and trade tensions. However, overall consumer confidence is improving, and the sports apparel market is expected to grow, driven by e-commerce and sustainability trends.

4. **Emerging Markets**: Nike's presence in emerging markets such as India, Brazil, and Southeast Asia provides opportunities for growth, given the rising disposable incomes, urbanization, and increasing focus on health and fitness. However, challenges such as currency fluctuations, regulatory changes, and competition from local brands could impact Nike's performance in these markets.

Overall, Nike's major markets exhibit a mix of opportunities and challenges, with economic conditions influenced by global trends, geopolitical factors, and consumer preferences.

### Current Consumer Preferences in the Sports Apparel Industry

1. **Sustainability**: Consumers are increasingly seeking eco-friendly and sustainable options in sports apparel, driving brands to focus on using recycled materials, reducing waste, and promoting ethical practices.

2. **Athleisure**: The trend of athleisure wear continues to be popular, with consumers looking for versatile and comfortable clothing that can be worn both during workouts and in everyday life.

3. **Performance and Functionality**: Consumers prioritize performance-enhancing features in sports apparel, such as moisture-wicking fabrics, breathable materials, and ergonomic designs that enhance comfort and mobility.

4. **Personalization**: Customization options, personalized fit, and unique design elements are appealing to consumers who seek individuality and exclusivity in their sports apparel choices.

5. **Brand Transparency**: Consumers value transparency in brand practices, including supply chain transparency, ethical sourcing, and clear communication on product quality and manufacturing processes.

Overall, consumer preferences in the sports apparel industry are shifting towards sustainability, versatility, performance, personalization, and transparency, influencing brand strategies and product offerings.

### Potential New Markets for Nike to Explore in 2024

1. **India**: With a growing population, increasing disposable incomes, and a rising interest in health and fitness, India presents a significant opportunity for Nike to expand its presence and tap into a large consumer base.

2. **Africa**: The African market, particularly countries with emerging economies and a young population, offers potential for Nike to introduce its products and capitalize on the growing demand for sportswear and athletic footwear.

3. **Middle East**: Countries in the Middle East, known for their luxury shopping destinations and a growing interest in sports and fitness activities, could be strategic markets for Nike to target and establish a strong foothold.

4. **Latin America**: Markets in Latin America, such as Brazil, Mexico, and Argentina, present opportunities for Nike to cater to a diverse consumer base and leverage the region's passion for sports and active lifestyles.

5. **Southeast Asia**: Rapid urbanization, increasing urban middle-class population, and a trend towards health and wellness in countries like Indonesia, Thailand, and Vietnam make Southeast Asia an attractive region for Nike to explore and expand its market reach.

By exploring these new markets in 2024, Nike can diversify its geographical presence, reach untapped consumer segments, and drive growth in emerging economies.

### Potential New Products or Services Nike Could Introduce in 2024

1. **Smart Apparel**: Nike could explore the integration of technology into its apparel, such as smart fabrics that monitor performance metrics, provide feedback, or enhance comfort during workouts.

2. **Athletic Accessories**: Introducing a line of athletic accessories like gym bags, water bottles, or fitness trackers could complement Nike's existing product offerings and provide additional value to customers.

3. **Customization Platforms**: Offering personalized design options for footwear and apparel through online customization platforms could appeal to consumers seeking unique and tailored products.

4. **Athletic Recovery Gear**: Developing recovery-focused products like compression wear, recovery sandals, or massage tools could cater to athletes and fitness enthusiasts looking to enhance post-workout recovery.

5. **Sustainable Collections**: Launching sustainable collections made from eco-friendly materials, recycled fabrics, or biodegradable components could align with consumer preferences for environmentally conscious products.

By introducing these new products or services in 2024, Nike can innovate its product portfolio, cater to evolving consumer needs, and differentiate itself in the competitive sports apparel market.

### Potential Marketing Strategies for Nike to Increase Revenue in Q3 2024

1. **Influencer Partnerships**: Collaborating with popular athletes, celebrities, or social media influencers to promote Nike products can help reach a wider audience and drive sales.

2. **Interactive Campaigns**: Launching interactive marketing campaigns, contests, or events that engage customers and create buzz around new product releases can generate excitement and increase brand visibility.

3. **Social Media Engagement**: Leveraging social media platforms to connect with consumers, share user-generated content, and respond to feedback can build brand loyalty and encourage repeat purchases.

4. **Localized Marketing**: Tailoring marketing messages, promotions, and product offerings to specific regions or target demographics can enhance relevance and appeal to diverse consumer groups.

5. **Customer Loyalty Programs**: Implementing loyalty programs, exclusive offers, or rewards for repeat customers can incentivize brand loyalty, increase retention rates, and drive higher lifetime customer value.

By employing these marketing strategies in Q3 2024, Nike can enhance its brand presence, attract new customers, and ultimately boost revenue growth.
docs/applications/compliance_swarm.md
ADDED
File without changes
docs/applications/customer_support.md
ADDED
@@ -0,0 +1,42 @@
## **Applications of Swarms: Revolutionizing Customer Support**

---

**Introduction**:
In today's fast-paced digital world, responsive and efficient customer support is a linchpin for business success. The introduction of AI-driven swarms in the customer support domain can transform the way businesses interact with and assist their customers. By leveraging the combined power of multiple AI agents working in concert, businesses can achieve unprecedented levels of efficiency, customer satisfaction, and operational cost savings.

---

### **The Benefits of Using Swarms for Customer Support:**

1. **24/7 Availability**: Swarms never sleep. Customers receive instantaneous support at any hour, ensuring constant satisfaction and loyalty.

2. **Infinite Scalability**: Whether it's ten inquiries or ten thousand, swarms can handle fluctuating volumes with ease, eliminating the need for vast human teams and minimizing response times.

3. **Adaptive Intelligence**: Swarms learn collectively, meaning that a solution found for one customer can be instantly applied to benefit all. This leads to constantly improving support experiences, evolving with every interaction.

---

### **Features - Reinventing Customer Support**:

- **AI Inbox Monitor**: Continuously scans email inboxes, identifying and categorizing support requests for swift responses.

- **Intelligent Debugging**: Proactively helps customers by diagnosing and troubleshooting underlying issues.

- **Automated Refunds & Coupons**: Seamless integration with payment systems like Stripe allows for instant issuance of refunds or coupons if a problem remains unresolved.

- **Full System Integration**: Holistically connects with CRM, email systems, and payment portals, ensuring a cohesive and unified support experience.

- **Conversational Excellence**: With advanced LLMs (Large Language Models), the swarm agents can engage in natural, human-like conversations, enhancing customer comfort and trust.

- **Rule-based Operation**: By working with rule engines, swarms ensure that all actions adhere to company guidelines, ensuring consistent, error-free support.

- **Turing Test Ready**: Crafted to meet and exceed the Turing Test standards, ensuring that every customer interaction feels genuine and personal.

---

**Conclusion**:
Swarms are not just another technological advancement; they represent the future of customer support. Their ability to provide round-the-clock, scalable, and continuously improving support can redefine customer experience standards. By adopting swarms, businesses can stay ahead of the curve, ensuring unparalleled customer loyalty and satisfaction.

**Experience the future of customer support. Dive into the swarm revolution.**
docs/applications/discord.md
ADDED
@@ -0,0 +1,105 @@
## Usage Documentation: Discord Bot with Advanced Features

---

### Overview:

This code provides a structure for a Discord bot with advanced features such as voice channel interactions, image generation, and text-based interactions using OpenAI models.

---

### Setup:

1. Ensure that the necessary libraries are installed:
```bash
pip install discord.py python-dotenv dalle3 invoke openai
```

2. Create a `.env` file in the same directory as your bot script and add the following:
```
DISCORD_TOKEN=your_discord_bot_token
STORAGE_SERVICE=your_storage_service_endpoint
SAVE_DIRECTORY=path_to_save_generated_images
```

---

### Bot Class and its Methods:

#### `__init__(self, agent, llm, command_prefix="!")`:

Initializes the bot with the given agent, language model (`llm`), and a command prefix (default is `!`).

#### `add_command(self, name, func)`:

Allows you to dynamically add new commands to the bot. The `name` is the command's name and `func` is the function to execute when the command is called.
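
For example, a custom command could be registered like this (the command name and callback are illustrative, not part of the shipped bot):

```python
async def ping(ctx):
    """Reply with a simple liveness check."""
    await ctx.send("pong")

bot.add_command("ping", ping)  # now invocable as !ping
```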
|
36 |
+
|
37 |
+
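For example, a minimal sketch of registering a custom `!ping` command might look like the following. The callback signature is an assumption here: `discord.py` passes the invocation context as the first argument, and this sketch assumes `add_command` forwards the function to the underlying command framework unchanged.

```python
from swarm_models import OpenAIChat
from apps.discord import Bot

llm = OpenAIChat(openai_api_key="Your_OpenAI_API_Key")
bot = Bot(llm=llm)

async def ping(ctx):
    """Reply with 'pong' so users can verify the bot is responsive."""
    await ctx.send("pong")

# Register the callback under the command name "ping" (invoked as !ping).
bot.add_command("ping", ping)
```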
#### `run(self)`:

Starts the bot using the `DISCORD_TOKEN` from the `.env` file.

---

### Commands:

1. **!greet**: Greets the user.

2. **!help_me**: Provides a list of commands and their descriptions.

3. **!join**: Joins the voice channel the user is in.

4. **!leave**: Leaves the voice channel the bot is currently in.

5. **!listen**: Starts listening to voice in the current voice channel and records the audio.

6. **!generate_image [prompt]**: Generates images based on the provided prompt using the DALL-E3 model.

7. **!send_text [text] [use_agent=True]**: Sends the provided text to the worker (either the agent or the LLM) and returns the response.

---

### Usage:

Initialize the `llm` (large language model) with your OpenAI API key:

```python
from swarm_models import OpenAIChat

llm = OpenAIChat(
    openai_api_key="Your_OpenAI_API_Key",
    temperature=0.5,
)
```

Initialize the bot with the `llm`:

```python
from apps.discord import Bot

bot = Bot(llm=llm)
```

Send a task to the bot:

```python
task = "What were the winning Boston Marathon times for the past 5 years (ending in 2022)? Generate a table of the year, name, country of origin, and times."
bot.send_text(task)
```

Start the bot:

```python
bot.run()
```

---

### Additional Notes:

- The bot makes use of the `dalle3` library for image generation. Ensure you have the model and necessary setup for it.

- For the storage service, you might want to integrate with a cloud service like Google Cloud Storage or AWS S3 to store and retrieve generated images. The given code assumes a method `.upload()` for the storage service to upload files.

- Ensure that you've granted the bot the necessary permissions on Discord, especially if you want to use voice channel features.

- Handle API keys and tokens securely. Avoid hardcoding them directly into your code. Use environment variables or secure secret management tools.
docs/applications/enterprise.md
ADDED
File without changes
docs/applications/marketing_agencies.md
ADDED
@@ -0,0 +1,64 @@
## **Swarms in Marketing Agencies: A New Era of Automated Media Strategy**

---

### **Introduction**:
- Brief background on marketing agencies and their role in driving brand narratives and sales.
- Current challenges and pain points faced in media planning, placements, and budgeting.
- Introduction to the transformative potential of swarms in reshaping the marketing industry.

---

### **1. Fundamental Problem: Media Plan Creation**:
- **Definition**: The challenge of creating an effective media plan that resonates with a target audience and aligns with brand objectives.

- **Traditional Solutions and Their Shortcomings**: Manual brainstorming sessions, over-reliance on past strategies, and long turnaround times leading to inefficiency.

- **How Swarms Address This Problem**:
  - **Benefit 1**: Automated Media Plan Generation – Swarms ingest branding summaries, objectives, and marketing strategies to generate media plans, eliminating guesswork and human error.
  - **Real-world Application of Swarms**: The automation of media plans based on client briefs, including platform selections, audience targeting, and creative versions.

---

### **2. Fundamental Problem: Media Placements**:
- **Definition**: The tedious task of determining where ads will be placed, considering demographics, platform specifics, and more.

- **Traditional Solutions and Their Shortcomings**: Manual placement leading to possible misalignment with target audiences and brand objectives.

- **How Swarms Address This Problem**:
  - **Benefit 2**: Precision Media Placements – Swarms analyze audience data and demographics to suggest the best placements, optimizing for conversions and brand reach.
  - **Real-world Application of Swarms**: Automated selection of ad placements across platforms like Facebook, Google, and DSPs based on media plans.

---

### **3. Fundamental Problem: Budgeting**:
- **Definition**: Efficiently allocating and managing advertising budgets across multiple campaigns, platforms, and timeframes.

- **Traditional Solutions and Their Shortcomings**: Manual budgeting using tools like Excel, prone to errors, and inefficient shifts in allocations.

- **How Swarms Address This Problem**:
  - **Benefit 3**: Intelligent Media Budgeting – Swarms enable dynamic budget allocation based on performance analytics, maximizing ROI.
  - **Real-world Application of Swarms**: Real-time adjustments in budget allocations based on campaign performance, eliminating long waiting periods and manual recalculations.

---

### **Features**:
1. Automated Media Plan Generator: Input your objectives and receive a comprehensive media plan.
2. Precision Media Placement Tool: Ensure your ads appear in the right places to the right people.
3. Dynamic Budget Allocation: Maximize ROI with real-time budget adjustments.
4. Integration with Common Tools: Seamless integration with tools like Excel and APIs for exporting placements.
5. Conversational Platform: A suite of tools built for modern marketing agencies, bringing all tasks under one umbrella.

---

### **Testimonials**:
- "Swarms have completely revolutionized our media planning process. What used to take weeks now takes mere hours." - *Senior Media Strategist, Top-tier Marketing Agency*
- "The precision with which we can place ads now is unprecedented. It's like having a crystal ball for marketing!" - *Campaign Manager, Global Advertising Firm*

---

### **Conclusion**:
- Reiterate the immense potential of swarms in revolutionizing media planning, placements, and budgeting for marketing agencies.
- Call to action: For marketing agencies looking to step into the future and leave manual inefficiencies behind, swarms are the answer.

---
docs/assets/css/extra.css
ADDED
@@ -0,0 +1,27 @@
/* Further customization as needed */

.md-typeset__table {
  min-width: 100%;
}

.md-typeset table:not([class]) {
  display: table;
}

/* Dark mode
[data-md-color-scheme="slate"] {
  --md-default-bg-color: black;
}

.header__ellipsis {
  color: black;
}

.md-copyright__highlight {
  color: black;
}

.md-header.md-header--shadow {
  color: black;
} */
docs/assets/img/SwarmsLogoIcon.png
ADDED
docs/assets/img/agent_def.png
ADDED
docs/assets/img/docs/query-plan-mini.png
ADDED
docs/assets/img/docs/query-plan.png
ADDED
docs/assets/img/reliabilitythrough.png
ADDED
docs/assets/img/swarmbanner.png
ADDED
docs/assets/img/swarms-logo.png
ADDED
docs/assets/img/swarmsbanner.png
ADDED
docs/assets/img/tools/output.png
ADDED
docs/clusterops/reference.md
ADDED
@@ -0,0 +1,334 @@
# ClusterOps API Reference

ClusterOps is a Python library for managing and executing tasks across CPU and GPU resources in a distributed computing environment. It provides functions for resource discovery, task execution, and performance monitoring.

## Installation

```bash
pip3 install clusterops
```

## Table of Contents
1. [CPU Operations](#cpu-operations)
2. [GPU Operations](#gpu-operations)
3. [Utility Functions](#utility-functions)
4. [Resource Monitoring](#resource-monitoring)

## CPU Operations

### `list_available_cpus()`

Lists all available CPU cores.

#### Returns
| Type | Description |
|------|-------------|
| `List[int]` | A list of available CPU core indices. |

#### Raises
| Exception | Description |
|-----------|-------------|
| `RuntimeError` | If no CPUs are found. |

#### Example
```python
from clusterops import list_available_cpus

available_cpus = list_available_cpus()
print(f"Available CPU cores: {available_cpus}")
```

### `execute_on_cpu(cpu_id: int, func: Callable, *args: Any, **kwargs: Any) -> Any`

Executes a callable on a specific CPU.

#### Parameters
| Name | Type | Description |
|------|------|-------------|
| `cpu_id` | `int` | The CPU core to run the function on. |
| `func` | `Callable` | The function to be executed. |
| `*args` | `Any` | Arguments for the callable. |
| `**kwargs` | `Any` | Keyword arguments for the callable. |

#### Returns
| Type | Description |
|------|-------------|
| `Any` | The result of the function execution. |

#### Raises
| Exception | Description |
|-----------|-------------|
| `ValueError` | If the CPU core specified is invalid. |
| `RuntimeError` | If there is an error executing the function on the CPU. |

#### Example
```python
from clusterops import execute_on_cpu

def sample_task(n: int) -> int:
    return n * n

result = execute_on_cpu(0, sample_task, 10)
print(f"Result of sample task on CPU 0: {result}")
```

### `execute_with_cpu_cores(core_count: int, func: Callable, *args: Any, **kwargs: Any) -> Any`

Executes a callable using a specified number of CPU cores.

#### Parameters
| Name | Type | Description |
|------|------|-------------|
| `core_count` | `int` | The number of CPU cores to run the function on. |
| `func` | `Callable` | The function to be executed. |
| `*args` | `Any` | Arguments for the callable. |
| `**kwargs` | `Any` | Keyword arguments for the callable. |

#### Returns
| Type | Description |
|------|-------------|
| `Any` | The result of the function execution. |

#### Raises
| Exception | Description |
|-----------|-------------|
| `ValueError` | If the number of CPU cores specified is invalid or exceeds available cores. |
| `RuntimeError` | If there is an error executing the function on the specified CPU cores. |

#### Example
```python
from clusterops import execute_with_cpu_cores

def parallel_task(n: int) -> int:
    return sum(range(n))

result = execute_with_cpu_cores(4, parallel_task, 1000000)
print(f"Result of parallel task using 4 CPU cores: {result}")
```

## GPU Operations

### `list_available_gpus() -> List[str]`

Lists all available GPUs.

#### Returns
| Type | Description |
|------|-------------|
| `List[str]` | A list of available GPU names. |

#### Raises
| Exception | Description |
|-----------|-------------|
| `RuntimeError` | If no GPUs are found. |

#### Example
```python
from clusterops import list_available_gpus

available_gpus = list_available_gpus()
print(f"Available GPUs: {available_gpus}")
```

### `select_best_gpu() -> Optional[int]`

Selects the GPU with the most free memory.

#### Returns
| Type | Description |
|------|-------------|
| `Optional[int]` | The GPU ID of the best available GPU, or None if no GPUs are available. |

#### Example
```python
from clusterops import select_best_gpu

best_gpu = select_best_gpu()
if best_gpu is not None:
    print(f"Best GPU for execution: GPU {best_gpu}")
else:
    print("No GPUs available")
```
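In practice, `select_best_gpu()` composes naturally with `execute_on_gpu()` (documented next); for example, a short sketch that assumes at least one GPU is available:

```python
from clusterops import select_best_gpu, execute_on_gpu

def gpu_task(n: int) -> int:
    return n ** 2

# Route the task to whichever GPU currently has the most free memory.
best_gpu = select_best_gpu()
if best_gpu is not None:
    result = execute_on_gpu(best_gpu, gpu_task, 10)
    print(f"Result on GPU {best_gpu}: {result}")
```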
### `execute_on_gpu(gpu_id: int, func: Callable, *args: Any, **kwargs: Any) -> Any`

Executes a callable on a specific GPU using Ray.

#### Parameters
| Name | Type | Description |
|------|------|-------------|
| `gpu_id` | `int` | The GPU to run the function on. |
| `func` | `Callable` | The function to be executed. |
| `*args` | `Any` | Arguments for the callable. |
| `**kwargs` | `Any` | Keyword arguments for the callable. |

#### Returns
| Type | Description |
|------|-------------|
| `Any` | The result of the function execution. |

#### Raises
| Exception | Description |
|-----------|-------------|
| `ValueError` | If the GPU index is invalid. |
| `RuntimeError` | If there is an error executing the function on the GPU. |

#### Example
```python
from clusterops import execute_on_gpu

def gpu_task(n: int) -> int:
    return n ** 2

result = execute_on_gpu(0, gpu_task, 10)
print(f"Result of GPU task on GPU 0: {result}")
```

### `execute_on_multiple_gpus(gpu_ids: List[int], func: Callable, all_gpus: bool = False, timeout: float = None, *args: Any, **kwargs: Any) -> List[Any]`

Executes a callable across multiple GPUs using Ray.

#### Parameters
| Name | Type | Description |
|------|------|-------------|
| `gpu_ids` | `List[int]` | The list of GPU IDs to run the function on. |
| `func` | `Callable` | The function to be executed. |
| `all_gpus` | `bool` | Whether to use all available GPUs (default: False). |
| `timeout` | `float` | Timeout for the execution in seconds (default: None). |
| `*args` | `Any` | Arguments for the callable. |
| `**kwargs` | `Any` | Keyword arguments for the callable. |

#### Returns
| Type | Description |
|------|-------------|
| `List[Any]` | A list of results from the execution on each GPU. |

#### Raises
| Exception | Description |
|-----------|-------------|
| `ValueError` | If any GPU index is invalid. |
| `RuntimeError` | If there is an error executing the function on the GPUs. |

#### Example
```python
from clusterops import execute_on_multiple_gpus

def multi_gpu_task(n: int) -> int:
    return n ** 3

results = execute_on_multiple_gpus([0, 1], multi_gpu_task, 5)
print(f"Results of multi-GPU task: {results}")
```

### `distributed_execute_on_gpus(gpu_ids: List[int], func: Callable, *args: Any, **kwargs: Any) -> List[Any]`

Executes a callable across multiple GPUs and nodes using Ray's distributed task scheduling.

#### Parameters
| Name | Type | Description |
|------|------|-------------|
| `gpu_ids` | `List[int]` | The list of GPU IDs across nodes to run the function on. |
| `func` | `Callable` | The function to be executed. |
| `*args` | `Any` | Arguments for the callable. |
| `**kwargs` | `Any` | Keyword arguments for the callable. |

#### Returns
| Type | Description |
|------|-------------|
| `List[Any]` | A list of results from the execution on each GPU. |

#### Example
```python
from clusterops import distributed_execute_on_gpus

def distributed_task(n: int) -> int:
    return n ** 4

results = distributed_execute_on_gpus([0, 1, 2, 3], distributed_task, 3)
print(f"Results of distributed GPU task: {results}")
```

## Utility Functions

### `retry_with_backoff(func: Callable, retries: int = RETRY_COUNT, delay: float = RETRY_DELAY, *args: Any, **kwargs: Any) -> Any`

Retries a callable function with exponential backoff in case of failure.

#### Parameters
| Name | Type | Description |
|------|------|-------------|
| `func` | `Callable` | The function to execute with retries. |
| `retries` | `int` | Number of retries (default: RETRY_COUNT from env). |
| `delay` | `float` | Delay between retries in seconds (default: RETRY_DELAY from env). |
| `*args` | `Any` | Arguments for the callable. |
| `**kwargs` | `Any` | Keyword arguments for the callable. |

#### Returns
| Type | Description |
|------|-------------|
| `Any` | The result of the function execution. |

#### Raises
| Exception | Description |
|-----------|-------------|
| `Exception` | After all retries fail. |

#### Example
```python
from clusterops import retry_with_backoff

def unstable_task():
    # Simulating an unstable task that might fail
    import random
    if random.random() < 0.5:
        raise Exception("Task failed")
    return "Task succeeded"

result = retry_with_backoff(unstable_task, retries=5, delay=1)
print(f"Result of unstable task: {result}")
```

## Resource Monitoring

### `monitor_resources()`

Continuously monitors CPU and GPU resources and logs alerts when thresholds are crossed.

#### Example
```python
from clusterops import monitor_resources

# Start monitoring resources
monitor_resources()
```

### `profile_execution(func: Callable, *args: Any, **kwargs: Any) -> Any`

Profiles the execution of a task, collecting metrics like execution time and CPU/GPU usage.

#### Parameters
| Name | Type | Description |
|------|------|-------------|
| `func` | `Callable` | The function to profile. |
| `*args` | `Any` | Arguments for the callable. |
| `**kwargs` | `Any` | Keyword arguments for the callable. |

#### Returns
| Type | Description |
|------|-------------|
| `Any` | The result of the function execution along with the collected metrics. |

#### Example
```python
from clusterops import profile_execution

def cpu_intensive_task():
    return sum(i * i for i in range(10000000))

result = profile_execution(cpu_intensive_task)
print(f"Result of profiled task: {result}")
```

This API reference provides a comprehensive overview of the ClusterOps library's main functions, their parameters, return values, and usage examples. It should help users understand and utilize the library effectively for managing and executing tasks across CPU and GPU resources in a distributed computing environment.
docs/corporate/2024_2025_goals.md
ADDED
@@ -0,0 +1,146 @@
# **Swarms Goals & Milestone Tracking: A Vision for 2024 and Beyond**

As we propel Swarms into a new frontier, we’ve set ambitious yet achievable goals for the coming years that will solidify Swarms as a leader in multi-agent orchestration. This document outlines our vision, the goals for 2024 and 2025, and how we track our progress through meticulously designed milestones and metrics.

## **Our Vision: The Agentic Ecosystem**

We envision an ecosystem where agents are pervasive and serve as integral collaborators in business processes, daily life, and complex problem-solving. By leveraging the collective intelligence of swarms, we believe we can achieve massive gains in productivity, scalability, and impact. Our target is to establish the Swarms platform as the go-to environment for deploying and managing agents at an unprecedented scale—making agents as common and indispensable as mobile apps are today. This future will see agents integrated into nearly every digital interaction, creating a seamless extension of human capability and reducing the cognitive load on individuals and organizations.

We believe that *agents* will transition from being simple tools to becoming full-fledged partners that can understand user needs, predict outcomes, and adapt to changes dynamically. Our vision is not just about increasing numbers; it’s about building a smarter, more interconnected agentic ecosystem where every agent has a purpose and contributes to a collective intelligence that continuously evolves. By cultivating a diverse array of agents capable of handling various specialized tasks, we aim to create an environment in which these digital collaborators function as a cohesive whole—one that can amplify human ingenuity and productivity beyond current limits.

## **Goals for 2024 and 2025**

To achieve our vision, we have laid out a structured growth trajectory for Swarms, driven by clear numerical targets:

1. **End of 2024: 500 Million Agents**
Currently, our platform hosts **45 million agents**. By the end of 2024, our goal is to reach **500 million agents** deployed on Swarms. This means achieving sustained exponential growth, which will require doubling or even tripling the total number of agents roughly **every month** from now until December 2024. Such growth will necessitate not only scaling infrastructure but also improving the ease with which users can develop and deploy agents, expanding educational resources, and fostering a vibrant community that drives innovation in agent design. To achieve this milestone, we plan to invest heavily in making our platform user-friendly, including simplifying onboarding processes and providing extensive educational content. Additionally, we aim to build out our infrastructure to support the necessary scalability and ensure the seamless operation of a growing number of agents. Beyond merely scaling in numbers, we are also focused on increasing the diversity of tasks that agents can perform, thereby enhancing the practical value of deploying agents on Swarms. (A quick sanity check of this doubling requirement appears just after this list.)

2. **End of 2025: 10 Billion+ Agents**
The long-term vision extends further to reach **10 billion agents** by the end of 2025. This ambitious goal reflects not only the organic growth of our user base but also the increasing role of swarms in business applications, personal projects, and global problem-solving initiatives. This goal requires continuous monthly doubling of agents and a clear roadmap of user engagement and deployment. By scaling to this level, we envision Swarms as a cornerstone of automation and productivity enhancement, where agents autonomously manage everything from mundane tasks to sophisticated strategic decisions, effectively enhancing human capabilities. This expansion will rely on the development of a robust ecosystem in which users can easily create, share, and enhance agents. We will foster partnerships with industries that can benefit from scalable agentic solutions—spanning healthcare, finance, education, and beyond. Our strategy includes developing domain-specific templates and specialized agents that cater to niche needs, thereby making Swarms an indispensable solution for businesses and individuals alike.
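As a quick back-of-the-envelope check of the doubling requirement in goal 1 (an estimate, not a committed forecast): growing from 45 million to 500 million agents is an overall factor of

$$
\frac{500\ \text{million}}{45\ \text{million}} \approx 11.1
\quad\Longrightarrow\quad
\log_2(11.1) \approx 3.5\ \text{doublings},
$$

so with only a few months remaining in 2024, the agent count must roughly double each month to stay on course, which matches the cadence described above.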
## **Tracking Progress: The Power of Metrics**

Achieving these goals is not just about reaching numerical targets but ensuring that our users are deriving tangible value from Swarms and deploying agents effectively. To measure success, we’ve defined several key performance indicators (KPIs) and milestones:

### 1. Growth in Agent Deployment

The **number of agents** deployed per month will be our primary growth metric. With our goal of **doubling agent count every month**, this metric serves as an overall health indicator for platform adoption and usage. Growth in deployment indicates that our platform is attracting users who see value in creating and deploying agents to solve diverse challenges.

**Key Milestones:**

- **November 2024**: Surpass 250 million agents.

- **December 2024**: Reach 500 million agents.

- **June 2025**: Break the 5 billion agents mark.

- **December 2025**: Hit 10 billion agents.

To accomplish this, we must continually expand our infrastructure, maintain scalability, and create a seamless user onboarding process. We’ll ensure that adding agents is frictionless and that our platform can accommodate this rapid growth. By integrating advanced orchestration capabilities, we will enable agents to form more complex collaborations and achieve tasks that previously seemed out of reach. Furthermore, we will develop analytics tools to track the success and efficiency of these agents, giving users real-time feedback to optimize their deployment strategies.

### 2. Agents Deployed Per User: Engagement Indicator

A core belief of Swarms is that agents are here to make life easier for their users—whether it’s automating mundane tasks, handling complex workflows, or enhancing creative endeavors. Therefore, we measure the **number of agents deployed per user per month** as a key metric for engagement. Tracking this metric allows us to understand how effectively our users are utilizing the platform, and how deeply agents are becoming embedded into their workflows.

This metric ensures that users aren’t just joining Swarms, but are actively building and deploying agents to solve real problems. Our milestone for engagement is to see **increasing growth in agents deployed per user** month over month, which indicates a deeper integration of Swarms into daily workflows and business processes. We want our users to view Swarms as their go-to solution for any problem they face, which means ensuring that agents are providing real, tangible benefits.

**Key Milestones:**

- **November 2024**: Achieve an average of 20 agents deployed per user each month.

- **June 2025**: Target 100-200+ agents deployed per user.

To drive these numbers, we plan to improve user support, enhance educational materials, host workshops, and create an environment that empowers users to deploy agents for increasingly complex use-cases. Additionally, we will introduce templates and pre-built agents that users can customize, reducing the barriers to entry and enabling rapid deployment for new users. We are also developing gamified elements that reward users for deploying more agents and achieving milestones, fostering a competitive and engaging community atmosphere.

### 3. Active vs. Inactive Agents: Measuring Churn

The **number of inactive agents per user** is an essential metric for understanding our **churn rate**. An agent is considered inactive when it remains undeployed or unused for a prolonged period, indicating that it’s no longer delivering value to the user. Churn metrics provide valuable insights into the effectiveness of our agents and highlight areas where improvements are needed.

We aim to **minimize the number of inactive agents**, as this will be a direct reflection of how well our agents are designed, integrated, and supported. A low churn rate means that users are finding long-term utility in their agents, which is key to our mission. Our platform’s success depends on users consistently deploying agents that remain active and valuable over time.

**Key Milestones:**

- **December 2024**: Ensure that no more than **30%** of deployed agents are inactive.

- **December 2025**: Aim for **10%** or lower, reflecting strong agent usefulness and consistent platform value delivery.

Reducing churn will require proactive measures, such as automated notifications to users about inactive agents, recommending potential uses, and implementing agent retraining features to enhance their adaptability over time. Educating users on prompt engineering, tool engineering, and RAG engineering also helps decrease these numbers, since a high count of inactive agents is evidence that the user is not automating a business operation with those agents. We will also integrate machine learning models to predict agent inactivity and take corrective actions before agents become dormant. By offering personalized recommendations to users on how to enhance or repurpose inactive agents, we hope to ensure that all deployed agents are actively contributing value.

## **Milestones and Success Criteria**

To reach these ambitious goals, we have broken our roadmap down into a series of actionable milestones:

1. **Infrastructure Scalability (Q1 2025)**
We will work on ensuring that our backend infrastructure can handle the scale required to reach 500 million agents by the end of 2024. This includes expanding server capacity, improving agent orchestration capabilities, and ensuring low latency across deployments. We will also focus on enhancing our database management systems to ensure efficient storage and retrieval of agent data, enabling seamless operation at a massive scale. Our infrastructure roadmap also includes implementing advanced load balancing techniques and predictive scaling mechanisms to ensure high availability and reliability.

2. **Improved User Experience (Q2 2025)**
To encourage agent deployment and reduce churn, we will introduce new onboarding flows, agent-building wizards, and intuitive user interfaces. We will also implement in-depth tutorials and documentation to simplify agent creation for new users. By making agent-building accessible even to those without programming expertise, we will open the doors to a broader audience and drive exponential growth in the number of agents deployed. Additionally, we will integrate AI-driven suggestions and contextual help to assist users at every step of the process, making the platform as intuitive as possible.

3. **Agent Marketplace (Q3 2025)**
Launching the **Swarms Marketplace** for agents, prompts, and tools will allow users to share, discover, and even monetize their agents. This marketplace will be a crucial driver in both increasing the number of agents deployed and reducing inactive agents, as it will create an ecosystem of continuously evolving and highly useful agents. Users will have the opportunity to browse agents that others have developed, which can serve as inspiration or as a starting point for their own projects. We will also introduce ratings, reviews, and community feedback mechanisms to ensure that the most effective agents are highlighted and accessible.

4. **Community Engagement and Swarms Education (Ongoing)**
Workshops, webinars, and events will be conducted throughout 2024 and 2025 to engage new users and educate them on building effective agents. The goal is to ensure that every user becomes proficient in deploying swarms of agents for meaningful tasks. We will foster an active community where users can exchange ideas, get help, and collaborate on projects, ultimately driving forward the growth of the Swarms ecosystem. We also plan to establish a mentor program where experienced users can guide newcomers, helping them get up to speed more quickly and successfully deploy agents.

## **Actionable Strategies for Goal Achievement**

**1. Developer Incentives**
One of our most important strategies will be the introduction of developer incentives. By providing rewards for creating agents, we foster an environment of creativity and encourage rapid growth in the number of useful agents on the platform. We will host hackathons, contests, and provide financial incentives to developers whose agents provide substantial value to the community. Additionally, we plan to create a tiered rewards system that acknowledges developers for the number of active deployments and the utility of their agents, motivating continuous improvement and innovation.

**2. Strategic Partnerships**
We plan to form partnerships with major technology providers and industry players to scale Swarms adoption. Integrating Swarms into existing business software and industrial processes will drive significant growth in agent numbers and usage. These partnerships will allow Swarms to become embedded into existing workflows, making it easier for users to understand the value and immediately apply agents to solve real-world challenges. We are also targeting partnerships with educational institutions to provide Swarms as a learning platform for AI, encouraging students and researchers to contribute to our growing ecosystem.

**3. User Feedback Loop**
To ensure we are on track, a continuous feedback loop with our user community will help us understand which agents are effective, which require improvements, and where we need to invest our resources to maximize engagement. Users’ experiences will shape our platform evolution. We will implement regular surveys, feedback forms, and user interviews to gather insights, and use this data to drive iterative development that is directly aligned with user needs. In addition, we will create an open feature request forum where users can vote on the most important features they want to see, ensuring that we are prioritizing our community’s needs.

**4. Marketing and Awareness Campaigns**
Strategic campaigns to showcase the power of swarms in specific industries will highlight the versatility and impact of our agents. We plan to create case studies demonstrating how swarms solve complex problems in marketing, finance, customer service, and other verticals, and use these to attract a wider audience. Our content marketing strategy will include blogs, video tutorials, and success stories to help potential users visualize the transformative power of Swarms. We will also leverage social media campaigns and influencer partnerships to reach a broader audience and generate buzz around Swarms’ capabilities.

**5. Educational Initiatives**
To lower the barrier to entry for new users, we will invest heavily in educational content. This includes video tutorials, comprehensive guides, and in-platform learning modules. By making the learning process easy and engaging, we ensure that users quickly become proficient in creating and deploying agents, thereby increasing user satisfaction and reducing churn. A well-educated user base will lead to more agents being deployed effectively, contributing to our overall growth targets. We are also developing certification programs for users and developers, providing a structured pathway to become proficient in Swarms technology and gain recognition for their skills.

## **The Path Ahead: Building Towards 10 Billion Agents**

To achieve our vision of **10 billion agents** by the end of 2025, it’s critical that we maintain an aggressive growth strategy while ensuring that agents are providing real value to users. This requires a deep focus on **scalability, community growth, and user-centric development**. It also demands a continuous feedback loop where insights from agent deployments and user interactions drive platform evolution. By creating an environment where agents are easy to develop, share, and integrate, we will achieve sustainable growth that benefits not just Swarms, but the broader AI community.

We envision swarms as a catalyst for *democratizing access to AI*. By enabling users across industries—from healthcare to education to manufacturing—to deploy agents that handle specialized tasks, we empower individuals and organizations to focus on creative, strategic endeavors rather than repetitive operational tasks. The journey to 10 billion agents is not just about scale; it’s about creating *meaningful and effective automation* that transforms how work gets done. We believe that Swarms will ultimately reshape industries by making sophisticated automation accessible to all, driving a shift toward higher productivity and innovation.

## **Community and Culture**

Swarms will also be emphasizing the **community aspect**, building a **culture of collaboration** among users, developers, and businesses. By fostering open communication and enabling the sharing of agents, we encourage **knowledge transfer** and **network effects**, which help drive overall growth. Our goal is to create an environment where agents not only work individually but evolve as a collective intelligence network—working towards a **post-scarcity civilization** where every problem can be tackled by the right combination of swarms.

We see the community as the heartbeat of Swarms, driving innovation, providing support, and expanding the use-cases for agents. Whether it’s through forums, community events, or user-generated content, we want Swarms to be the hub where people come together to solve the most pressing challenges of our time. By empowering our users and encouraging collaboration, we can ensure that the platform continuously evolves and adapts to new needs and opportunities. Additionally, we plan to establish local Swarms chapters worldwide, where users can meet in person to share knowledge, collaborate on projects, and build lasting relationships that strengthen the global Swarms community.

# **Conclusion: Measuring Success One Milestone at a Time**

The **path to 500 million agents by the end of 2024** and **10 billion agents by the end of 2025** is paved with strategic growth, infrastructure resilience, and user-centric improvements. Each milestone is a step closer to a fully realized vision of an agentic economy—one where agents are ubiquitous, assisting individuals, businesses, and entire industries in achieving their goals more efficiently.

By **tracking key metrics**, such as growth in agent numbers, the rate of agent deployment per user, and reducing churn, we ensure that Swarms not only grows in size but also in effectiveness, adoption, and user satisfaction. Through a combination of infrastructure development, community engagement, incentives, and constant user feedback, we will create an ecosystem where agents thrive, users are empowered, and the entire platform evolves towards our ambitious vision.

This is the journey of Swarms—**a journey towards redefining how we interact with AI, solve complex problems, and enhance productivity**. With each milestone, we get closer to a future where swarms of agents are the bedrock of human-machine collaboration and an integral part of our daily lives. The journey ahead is one of transformation, creativity, and collaboration, as we work together to create an AI-driven world that benefits everyone, enabling us to achieve more than we ever thought possible. Our commitment to building an agentic ecosystem is unwavering, and we are excited to see the incredible impact that swarms of agents will have on the future of work, innovation, and human potential.
|
docs/corporate/architecture.md
ADDED
@@ -0,0 +1,358 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
# Architecture

## **1. Introduction**

In today's rapidly evolving digital world, harnessing the collaborative power of multiple computational agents is more crucial than ever. 'Swarms' represents a bold stride in this direction—a scalable and dynamic framework designed to enable swarms of agents to function in harmony and tackle complex tasks. This document serves as a comprehensive guide, elucidating the underlying architecture and strategies pivotal to realizing the Swarms vision.

---

## **2. The Vision**

At its heart, the Swarms framework seeks to emulate the collaborative efficiency witnessed in natural systems, like ant colonies or bird flocks. These entities, though individually simple, achieve remarkable outcomes through collaboration. Similarly, Swarms will unleash the collective potential of numerous agents, operating cohesively.

---

## **3. Architecture Overview**

### **3.1 Agent Level**
The base level that serves as the building block for all further complexity.

#### Mechanics:
* **Model**: At its core, each agent harnesses a powerful model like OpenAI's GPT.
* **Vectorstore**: A memory structure allowing agents to store and retrieve information.
* **Tools**: Utilities and functionalities that aid in the agent's task execution.

#### Interaction:
Agents interact with the external world through their model and tools. The Vectorstore aids in retaining knowledge and facilitating inter-agent communication.

### **3.2 Worker Infrastructure Level**
Building on the agent foundation, enhancing capability and readiness for swarm integration.

#### Mechanics:
* **Human Input Integration**: Enables agents to accept and understand human-provided instructions.
* **Unique Identifiers**: Assigns each agent a unique ID to facilitate tracking and communication.
* **Asynchronous Tools**: Bolsters agents' capability to multitask and interact in real-time.

#### Interaction:
Each worker is an enhanced agent, capable of operating independently or in sync with its peers, allowing for dynamic, scalable operations.

### **3.3 Swarm Level**
Multiple Worker Nodes orchestrated into a synchronized, collaborative entity.

#### Mechanics:
* **Orchestrator**: The maestro, responsible for directing the swarm, task allocation, and communication.
* **Scalable Communication Layer**: Facilitates interactions among nodes and between nodes and the orchestrator.
* **Task Assignment & Completion Protocols**: Structured procedures ensuring tasks are efficiently distributed and concluded.

#### Interaction:
Nodes collaborate under the orchestrator's guidance, ensuring tasks are partitioned appropriately, executed, and results consolidated.

### **3.4 Hivemind Level**
Envisioned as a 'Swarm of Swarms'. An upper echelon of collaboration.

#### Mechanics:
* **Hivemind Orchestrator**: Oversees multiple swarm orchestrators, ensuring harmony on a grand scale.
* **Inter-Swarm Communication Protocols**: Dictates how swarms interact, exchange information, and co-execute tasks.

#### Interaction:
Multiple swarms, each a formidable force, combine their prowess under the Hivemind. This level tackles monumental tasks by dividing them among swarms.

---

## **4. Building the Framework: A Task Checklist**

### **4.1 Foundations: Agent Level**
* Define and standardize agent properties.
* Integrate desired model (e.g., OpenAI's GPT) with agent.
* Implement Vectorstore mechanisms: storage, retrieval, and communication protocols.
* Incorporate essential tools and utilities.
* Conduct preliminary testing: Ensure agents can execute basic tasks and utilize the Vectorstore.

### **4.2 Enhancements: Worker Infrastructure Level**
* Interface agents with human input mechanisms.
* Assign and manage unique identifiers for each worker.
* Integrate asynchronous capabilities: Ensure real-time response and multitasking.
* Test worker nodes for both solitary and collaborative tasks.

### **4.3 Cohesion: Swarm Level**
* Design and develop the orchestrator: Ensure it can manage multiple worker nodes.
* Establish a scalable and efficient communication layer.
* Implement task distribution and retrieval protocols.
* Test swarms for efficiency, scalability, and robustness.

### **4.4 Apex Collaboration: Hivemind Level**
* Build the Hivemind Orchestrator: Ensure it can oversee multiple swarms.
* Define inter-swarm communication, prioritization, and task-sharing protocols.
* Develop mechanisms to balance loads and optimize resource utilization across swarms.
* Thoroughly test the Hivemind level for macro-task execution.

---

## **5. Integration and Communication Mechanisms**

### **5.1 Vectorstore as the Universal Communication Layer**
Serving as the memory and communication backbone, the Vectorstore must:
* Facilitate rapid storage and retrieval of high-dimensional vectors.
* Enable similarity-based lookups: Crucial for recognizing patterns or finding similar outputs.
* Scale seamlessly as agent count grows.
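To make these requirements concrete, the following is a minimal in-memory sketch (an illustration only, not the Swarms implementation) of a store supporting vector storage and cosine-similarity lookups:

```python
import numpy as np

class MiniVectorStore:
    """Toy in-memory vector store: add vectors by key, query by cosine similarity."""

    def __init__(self):
        self.keys = []
        self.vectors = []  # unit-normalized 1-D arrays, all of the same dimension

    def add(self, key, vector):
        v = np.asarray(vector, dtype=float)
        self.vectors.append(v / np.linalg.norm(v))  # store unit vectors
        self.keys.append(key)

    def most_similar(self, query, top_k=3):
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        scores = np.array([v @ q for v in self.vectors])  # dot of unit vectors = cosine
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.keys[i], float(scores[i])) for i in best]

store = MiniVectorStore()
store.add("agent-1-observation", [0.1, 0.9, 0.2])
store.add("agent-2-observation", [0.8, 0.1, 0.1])
print(store.most_similar([0.1, 1.0, 0.1], top_k=1))
```

A production layer would add persistence, approximate-nearest-neighbor indexing, and concurrent access, but the storage/lookup contract above is the one the bullets describe.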
* Orchestrators, both at the swarm and hivemind level, should employ adaptive algorithms to optimally distribute tasks.
|
101 |
+
* Ensure real-time monitoring of task execution and worker node health.
|
102 |
+
* Integrate feedback loops: Allow for dynamic task reassignment in case of node failures or inefficiencies.
|
103 |
+
|
104 |
+
---
|
105 |
+
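The sketch below illustrates this pattern in miniature: a shared task queue, workers pulling tasks, and results flowing back for monitoring. It is a single-process analogy; a real deployment would replace the threads and in-process queues with distributed primitives.

```python
import queue
import threading

def worker(worker_id, tasks, results):
    """Drain the shared task queue, reporting each completion to the orchestrator."""
    while True:
        try:
            task = tasks.get_nowait()
        except queue.Empty:
            return
        results.put((worker_id, f"done: {task}"))
        tasks.task_done()

# Orchestrator side: enqueue tasks, start workers, then collect completions.
tasks = queue.Queue()
results = queue.Queue()
for t in ["summarize report", "draft email", "classify tickets"]:
    tasks.put(t)

threads = [threading.Thread(target=worker, args=(i, tasks, results)) for i in range(2)]
for th in threads:
    th.start()
tasks.join()  # block until every task has been marked done
for th in threads:
    th.join()

while not results.empty():
    worker_id, outcome = results.get()
    print(f"worker {worker_id} -> {outcome}")
```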
|
106 |
+
## **6. Conclusion & Forward Path**
|
107 |
+
|
108 |
+
The Swarms framework, once realized, will usher in a new era of computational efficiency and collaboration. While the roadmap ahead is intricate, with diligent planning, development, and testing, Swarms will redefine the boundaries of collaborative computing.
|
109 |
+
|
110 |
+
--------
|
111 |
+
|
112 |
+
|
113 |
+
# Overview
|
114 |
+
|
115 |
+
### 1. Model
|
116 |
+
|
117 |
+
**Overview:**
|
118 |
+
The foundational level where a trained model (e.g., OpenAI GPT model) is initialized. It's the base on which further abstraction levels build upon. It provides the core capabilities to perform tasks, answer queries, etc.
|
119 |
+
|
120 |
+
**Diagram:**
|
121 |
+
```
|
122 |
+
[ Model (openai) ]
|
123 |
+
```
|
124 |
+
|
125 |
+
### 2. Agent Level
|
126 |
+
|
127 |
+
**Overview:**
|
128 |
+
At the agent level, the raw model is coupled with tools and a vector store, allowing it to be more than just a model. The agent can now remember, use tools, and become a more versatile entity ready for integration into larger systems.
|
129 |
+
|
130 |
+
**Diagram:**
|
131 |
+
```
|
132 |
+
+-----------+
|
133 |
+
| Agent |
|
134 |
+
| +-------+ |
|
135 |
+
| | Model | |
|
136 |
+
| +-------+ |
|
137 |
+
| +-----------+ |
|
138 |
+
| | VectorStore | |
|
139 |
+
| +-----------+ |
|
140 |
+
| +-------+ |
|
141 |
+
| | Tools | |
|
142 |
+
| +-------+ |
|
143 |
+
+-----------+
|
144 |
+
```
|
145 |
+
|
146 |
+
### 3. Worker Infrastructure Level
|
147 |
+
|
148 |
+
**Overview:**
|
149 |
+
The worker infrastructure is a step above individual agents. Here, an agent is paired with additional utilities like human input and other tools, making it a more advanced, responsive unit capable of complex tasks.
|
150 |
+
|
151 |
+
**Diagram:**
|
152 |
+
```
|
153 |
+
+----------------+
|
154 |
+
| WorkerNode |
|
155 |
+
| +-----------+ |
|
156 |
+
| | Agent | |
|
157 |
+
| | +-------+ | |
|
158 |
+
| | | Model | | |
|
159 |
+
| | +-------+ | |
|
160 |
+
| | +-------+ | |
|
161 |
+
| | | Tools | | |
|
162 |
+
| | +-------+ | |
|
163 |
+
| +-----------+ |
|
164 |
+
| |
|
165 |
+
| +-----------+ |
|
166 |
+
| |Human Input| |
|
167 |
+
| +-----------+ |
|
168 |
+
| |
|
169 |
+
| +-------+ |
|
170 |
+
| | Tools | |
|
171 |
+
| +-------+ |
|
172 |
+
+----------------+
|
173 |
+
```
|
174 |
+
|
175 |
+
### 4. Swarm Level
|
176 |
+
|
177 |
+
**Overview:**
|
178 |
+
At the swarm level, the orchestrator is central. It's responsible for assigning tasks to worker nodes, monitoring their completion, and handling the communication layer (for example, through a vector store or another universal communication mechanism) between worker nodes.
|
179 |
+
|
180 |
+
**Diagram:**
|
181 |
+
```
|
182 |
+
+------------+
|
183 |
+
|Orchestrator|
|
184 |
+
+------------+
|
185 |
+
|
|
186 |
+
+---------------------------+
|
187 |
+
| |
|
188 |
+
| Swarm-level Communication|
|
189 |
+
| Layer (e.g. |
|
190 |
+
| Vector Store) |
|
191 |
+
+---------------------------+
|
192 |
+
/ | \
|
193 |
+
+---------------+ +---------------+ +---------------+
|
194 |
+
|WorkerNode 1 | |WorkerNode 2 | |WorkerNode n |
|
195 |
+
| | | | | |
|
196 |
+
+---------------+ +---------------+ +---------------+
|
197 |
+
| Task Assigned | Task Completed | Communication |
|
198 |
+
```
|
199 |
+
|
200 |
+
### 5. Hivemind Level
|
201 |
+
|
202 |
+
**Overview:**
|
203 |
+
At the Hivemind level, it's a multi-swarm setup, with an upper-layer orchestrator managing multiple swarm-level orchestrators. The Hivemind orchestrator is responsible for broader tasks like assigning macro-tasks to swarms, handling inter-swarm communications, and ensuring the overall system is functioning smoothly.
|
204 |
+
|
205 |
+
**Diagram:**
|
206 |
+
```
|
207 |
+
+--------+
|
208 |
+
|Hivemind|
|
209 |
+
+--------+
|
210 |
+
|
|
211 |
+
+--------------+
|
212 |
+
|Hivemind |
|
213 |
+
|Orchestrator |
|
214 |
+
+--------------+
|
215 |
+
/ | \
|
216 |
+
+------------+ +------------+ +------------+
|
217 |
+
|Orchestrator| |Orchestrator| |Orchestrator|
|
218 |
+
+------------+ +------------+ +------------+
|
219 |
+
| | |
|
220 |
+
+--------------+ +--------------+ +--------------+
|
221 |
+
| Swarm-level| | Swarm-level| | Swarm-level|
|
222 |
+
|Communication| |Communication| |Communication|
|
223 |
+
| Layer | | Layer | | Layer |
|
224 |
+
+--------------+ +--------------+ +--------------+
|
225 |
+
/ \ / \ / \
|
226 |
+
+-------+ +-------+ +-------+ +-------+ +-------+
|
227 |
+
|Worker | |Worker | |Worker | |Worker | |Worker |
|
228 |
+
| Node | | Node | | Node | | Node | | Node |
|
229 |
+
+-------+ +-------+ +-------+ +-------+ +-------+
|
230 |
+
```
|
231 |
+
|
232 |
+
This setup allows the Hivemind level to operate at a grander scale, with the capability to manage hundreds or even thousands of worker nodes across multiple swarms efficiently.


-------
# **Swarms Framework Development Strategy Checklist**

## **Introduction**

The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.

---

## **1. Agent Level Development**

### **1.1 Model Integration**
- [ ] Research the most suitable models (e.g., OpenAI's GPT).
- [ ] Design an API for the agent to call the model.
- [ ] Implement error handling when model calls fail.
- [ ] Test the model with sample data for accuracy and speed.
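
As a hedged sketch of the error-handling item above: a model call can be wrapped with retries and backoff. `call_model` is a hypothetical placeholder for whichever provider API the agent uses, not a real function in the framework.

```python
# Sketch for 1.1: wrap the model call with retries and backoff so a
# transient provider failure does not crash the agent.
import time


def call_model(prompt: str) -> str:
    raise TimeoutError("simulated provider timeout")  # hypothetical placeholder


def call_model_with_retries(prompt: str, retries: int = 3, backoff: float = 1.0) -> str:
    for attempt in range(1, retries + 1):
        try:
            return call_model(prompt)
        except Exception as error:
            if attempt == retries:
                raise RuntimeError(
                    f"model call failed after {retries} attempts"
                ) from error
            time.sleep(backoff * attempt)  # wait longer after each failure
    raise AssertionError("unreachable")
```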

### **1.2 Vectorstore Implementation**
- [ ] Design the schema for the vector storage system.
- [ ] Implement storage methods to add, delete, and update vectors.
- [ ] Develop retrieval methods with optimization for speed.
- [ ] Create protocols for vector-based communication between agents.
- [ ] Conduct stress tests to ascertain storage and retrieval speed.
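
For the storage and retrieval items above, a minimal in-memory sketch might look like the following; a production vectorstore would use an approximate-nearest-neighbor index, and `VectorStore` here is an illustrative name, not the framework's class.

```python
# Minimal in-memory vector store: add/update, delete, and
# cosine-similarity retrieval over raw Python lists.
import math


class VectorStore:
    def __init__(self) -> None:
        self.vectors: dict[str, list[float]] = {}

    def add(self, key: str, vector: list[float]) -> None:
        self.vectors[key] = vector  # add or update in place

    def delete(self, key: str) -> None:
        self.vectors.pop(key, None)

    def query(self, vector: list[float], top_k: int = 1) -> list[str]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
            return dot / norm if norm else 0.0

        ranked = sorted(
            self.vectors, key=lambda k: cosine(vector, self.vectors[k]), reverse=True
        )
        return ranked[:top_k]


store = VectorStore()
store.add("doc-1", [1.0, 0.0])
store.add("doc-2", [0.0, 1.0])
print(store.query([0.9, 0.1]))  # ['doc-1']
```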

### **1.3 Tools & Utilities Integration**
- [ ] List out essential tools required for agent functionality.
- [ ] Develop or integrate APIs for each tool.
- [ ] Implement error handling and logging for tool interactions.
- [ ] Validate tools integration with unit tests.

---

## **2. Worker Infrastructure Level Development**

### **2.1 Human Input Integration**
- [ ] Design a UI/UX for human interaction with worker nodes.
- [ ] Create APIs for input collection.
- [ ] Implement input validation and error handling.
- [ ] Test human input methods for clarity and ease of use.

### **2.2 Unique Identifier System**
- [ ] Research optimal formats for unique ID generation.
- [ ] Develop methods for generating and assigning IDs to agents.
- [ ] Implement a tracking system to manage and monitor agents via IDs.
- [ ] Validate the uniqueness and reliability of the ID system.

### **2.3 Asynchronous Operation Tools**
- [ ] Incorporate libraries/frameworks to enable asynchrony.
- [ ] Ensure tasks within an agent can run in parallel without conflict.
- [ ] Test asynchronous operations for efficiency improvements.
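
For the asynchrony items above, Python's standard `asyncio` library is one option; the sketch below shows several simulated agent tasks running concurrently with `asyncio.gather`. The sleeps stand in for real model or tool I/O.

```python
# Sketch for 2.3: run several agent tasks concurrently with asyncio.
import asyncio


async def run_tool(name: str, seconds: float) -> str:
    await asyncio.sleep(seconds)  # stands in for network-bound work
    return f"{name} done"


async def main() -> None:
    # gather() runs the coroutines concurrently, so total wall time is
    # roughly the longest single task, not the sum of all of them.
    results = await asyncio.gather(
        run_tool("search", 0.2),
        run_tool("summarize", 0.3),
        run_tool("write", 0.1),
    )
    print(results)


asyncio.run(main())
```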

---

## **3. Swarm Level Development**

### **3.1 Orchestrator Design & Development**
- [ ] Draft a blueprint of orchestrator functionalities.
- [ ] Implement methods for task distribution among worker nodes.
- [ ] Develop communication protocols for the orchestrator to monitor workers.
- [ ] Create feedback systems to detect and address worker node failures.
- [ ] Test orchestrator with a mock swarm to ensure efficient task allocation.

### **3.2 Communication Layer Development**
- [ ] Select a suitable communication protocol/framework (e.g., gRPC, WebSockets).
- [ ] Design the architecture for scalable, low-latency communication.
- [ ] Implement methods for sending, receiving, and broadcasting messages.
- [ ] Test communication layer for reliability, speed, and error handling.

### **3.3 Task Management Protocols**
- [ ] Develop a system to queue, prioritize, and allocate tasks.
- [ ] Implement methods for real-time task status tracking.
- [ ] Create a feedback loop for completed tasks.
- [ ] Test task distribution, execution, and feedback systems for efficiency.
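
As a hedged sketch of the queueing and status-tracking items above, a priority queue built on Python's `heapq` might look like this; `TaskManager` and its methods are illustrative names only.

```python
# Sketch for 3.3: a priority task queue with simple status tracking.
# heapq pops the smallest tuple first, so priority 0 is most urgent.
import heapq


class TaskManager:
    def __init__(self) -> None:
        self.heap: list[tuple[int, int, str]] = []
        self.status: dict[str, str] = {}
        self.counter = 0  # tie-breaker keeps insertion order for equal priorities

    def submit(self, task: str, priority: int) -> None:
        heapq.heappush(self.heap, (priority, self.counter, task))
        self.status[task] = "queued"
        self.counter += 1

    def next_task(self) -> str:
        _, _, task = heapq.heappop(self.heap)
        self.status[task] = "running"
        return task

    def complete(self, task: str) -> None:
        self.status[task] = "done"  # a feedback loop would hook in here


manager = TaskManager()
manager.submit("patch security bug", priority=0)
manager.submit("write changelog", priority=2)
task = manager.next_task()
manager.complete(task)
print(manager.status)  # {'patch security bug': 'done', 'write changelog': 'queued'}
```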

---

## **4. Hivemind Level Development**

### **4.1 Hivemind Orchestrator Development**
- [ ] Extend swarm orchestrator functionalities to manage multiple swarms.
- [ ] Create inter-swarm communication protocols.
- [ ] Implement load balancing mechanisms to distribute tasks across swarms.
- [ ] Validate hivemind orchestrator functionalities with multi-swarm setups.

### **4.2 Inter-Swarm Communication Protocols**
- [ ] Design methods for swarms to exchange data.
- [ ] Implement data reconciliation methods for swarms working on shared tasks.
- [ ] Test inter-swarm communication for efficiency and data integrity.

---

## **5. Scalability & Performance Testing**

- [ ] Simulate heavy loads to test the limits of the framework.
- [ ] Identify and address bottlenecks in both communication and computation.
- [ ] Conduct speed tests under different conditions.
- [ ] Test the system's responsiveness under various levels of stress.

---

## **6. Documentation & User Guide**

- [ ] Develop detailed documentation covering architecture, setup, and usage.
- [ ] Create user guides with step-by-step instructions.
- [ ] Incorporate visual aids, diagrams, and flowcharts for clarity.
- [ ] Update documentation regularly with new features and improvements.

---

## **7. Continuous Integration & Deployment**

- [ ] Set up CI/CD pipelines for automated testing and deployment.
- [ ] Ensure automatic rollback in case of deployment failures.
- [ ] Integrate code quality and security checks in the pipeline.
- [ ] Document deployment strategies and best practices.

---

## **Conclusion**

The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.

(Note: This document, given the word limit, provides a high-level overview. A full 5000-word document would delve into even more intricate details, nuances, potential pitfalls, and include considerations for security, user experience, compatibility, etc.)
docs/corporate/bounties.md
ADDED
@@ -0,0 +1,86 @@
# Bounty Program

Our bounty program is an exciting opportunity for contributors to help us build the future of Swarms. By participating, you can earn rewards while contributing to a project that aims to revolutionize digital activity.

Here's how it works:

1. **Check out our Roadmap**: We've shared our roadmap detailing our short and long-term goals. These are the areas where we're seeking contributions.

2. **Pick a Task**: Choose a task from the roadmap that aligns with your skills and interests. If you're unsure, you can reach out to our team for guidance.

3. **Get to Work**: Once you've chosen a task, start working on it. Remember, quality is key. We're looking for contributions that truly make a difference.

4. **Submit your Contribution**: Once your work is complete, submit it for review. We'll evaluate your contribution based on its quality, relevance, and the value it brings to Swarms.

5. **Earn Rewards**: If your contribution is approved, you'll earn a bounty. The amount of the bounty depends on the complexity of the task, the quality of your work, and the value it brings to Swarms.

## The Three Phases of Our Bounty Program

### Phase 1: Building the Foundation
In the first phase, our focus is on building the basic infrastructure of Swarms. This includes developing key components like the Swarms class, integrating essential tools, and establishing task completion and evaluation logic. We'll also start developing our testing and evaluation framework during this phase. If you're interested in foundational work and have a knack for building robust, scalable systems, this phase is for you.

### Phase 2: Enhancing the System
In the second phase, we'll focus on enhancing Swarms by integrating more advanced features, improving the system's efficiency, and refining our testing and evaluation framework. This phase involves more complex tasks, so if you enjoy tackling challenging problems and contributing to the development of innovative features, this is the phase for you.

### Phase 3: Towards Super-Intelligence
The third phase of our bounty program is the most exciting: this is where we aim to achieve super-intelligence. In this phase, we'll be working on improving the swarm's capabilities, expanding its skills, and fine-tuning the system based on real-world testing and feedback. If you're excited about the future of AI and want to contribute to a project that could potentially transform the digital world, this is the phase for you.

Remember, our roadmap is a guide, and we encourage you to bring your own ideas and creativity to the table. We believe that every contribution, no matter how small, can make a difference. So join us on this exciting journey and help us create the future of Swarms.

**To participate in our bounty program, visit the [Swarms Bounty Program Page](https://swarms.ai/bounty).** Let's build the future together!

## Bounties for Roadmap Items

To accelerate the development of Swarms and to encourage more contributors to join our journey towards automating every digital activity in existence, we are announcing a Bounty Program for specific roadmap items. Each bounty will be rewarded based on the complexity and importance of the task. Below are the items available for bounty:

1. **Multi-Agent Debate Integration**: $2000
2. **Meta Prompting Integration**: $1500
3. **Swarms Class**: $1500
4. **Integration of Additional Tools**: $1000
5. **Task Completion and Evaluation Logic**: $2000
6. **Ocean Integration**: $2500
7. **Improved Communication**: $2000
8. **Testing and Evaluation**: $1500
9. **Worker Swarm Class**: $2000
10. **Documentation**: $500

For each bounty task, there will be a strict evaluation process to ensure the quality of the contribution. This process includes a thorough review of the code and extensive testing to ensure it meets our standards.

# 3-Phase Testing Framework

To ensure the quality and efficiency of the Swarm, we will introduce a 3-phase testing framework, which will also serve as our evaluation criteria for each of the bounty tasks.

## Phase 1: Unit Testing
In this phase, individual modules will be tested to ensure that they work correctly in isolation. Unit tests will be designed for all functions and methods, with an emphasis on edge cases.

## Phase 2: Integration Testing
After passing unit tests, we will test the integration of different modules to ensure they work correctly together. This phase will also test the interoperability of the Swarm with external systems and libraries.

## Phase 3: Benchmarking & Stress Testing
In the final phase, we will perform benchmarking and stress tests. We'll push the limits of the Swarm under extreme conditions to ensure it performs well in real-world scenarios. This phase will measure the performance, speed, and scalability of the Swarm under high load conditions.

By following this 3-phase testing framework, we aim to develop a reliable, high-performing, and scalable Swarm that can automate all digital activities.

# Reverse Engineering to Reach Phase 3

To reach the Phase 3 level, we need to reverse engineer the tasks we need to complete. Here's an example of what this might look like:

1. **Set Clear Expectations**: Define what success looks like for each task. Be clear about the outputs and outcomes we expect. This will guide our testing and development efforts.

2. **Develop Testing Scenarios**: Create a comprehensive list of testing scenarios that cover both common and edge cases. This will help us ensure that our Swarm can handle a wide range of situations.

3. **Write Test Cases**: For each scenario, write detailed test cases that outline the exact steps to be followed, the inputs to be used, and the expected outputs.

4. **Execute the Tests**: Run the test cases on our Swarm, making note of any issues or bugs that arise.

5. **Iterate and Improve**: Based on the results of our tests, iterate and improve our Swarm. This may involve fixing bugs, optimizing code, or redesigning parts of our system.

6. **Repeat**: Repeat this process until our Swarm meets our expectations and passes all test cases.

By following these steps, we will systematically build, test, and improve our Swarm until it reaches the Phase 3 level. This methodical approach will help us ensure that we create a reliable, high-performing, and scalable Swarm that can truly automate all digital activities.

Let's shape the future of digital automation together!
docs/corporate/bounty_program.md
ADDED
@@ -0,0 +1,74 @@
# Swarms Bounty Program

The Swarms Bounty Program is an initiative designed to incentivize contributors to help us improve and expand the Swarms framework. With an impressive $150,000 allocated for bounties, contributors have the unique opportunity to earn generous rewards while gaining prestigious recognition in the Swarms community of over 9,000 agent engineers. This program offers more than just financial benefits; it allows contributors to play a pivotal role in advancing the field of multi-agent collaboration and AI automation, while also growing their professional skills and network. By joining the Swarms Bounty Program, you become part of an innovative movement shaping the future of technology.

## Why Contribute?

1. **Generous Rewards**: The bounty pool totals $150,000, ensuring that contributors are fairly compensated for their valuable work on successfully completed tasks. Each task comes with its own reward, reflecting its complexity and impact.

2. **Community Status**: Gain coveted recognition as a valued and active contributor within the thriving Swarms community. This status not only highlights your contributions but also builds your reputation among a network of AI engineers.

3. **Skill Development**: Collaborate on cutting-edge AI projects, hone your expertise in agent engineering, and learn practical skills that can be applied to real-world challenges in the AI domain.

4. **Networking Opportunities**: Work side-by-side with over 9,000 agent engineers in our active and supportive community. This network fosters collaboration, knowledge sharing, and mentorship opportunities that can significantly boost your career.

## How It Works

1. **Explore Issues and Tasks**:
   - Visit the [Swarms GitHub Issues](https://github.com/kyegomez/swarms/issues) to find a comprehensive list of open tasks requiring attention. These issues range from coding challenges to documentation improvements, offering opportunities for contributors with various skill sets.
   - Check the [Swarms Project Board](https://github.com/users/kyegomez/projects/1) for prioritized tasks and ongoing milestones. This board provides a clear view of project priorities and helps contributors align their efforts with the project's immediate goals.

2. **Claim a Bounty**:
   - Identify a task that aligns with your interests and expertise.
   - Comment on the issue to indicate your intent to work on it and describe your approach if necessary.
   - Await approval from the Swarms team before commencing work. Approval ensures clarity and avoids duplication of efforts by other contributors.

3. **Submit Your Work**:
   - Complete the task as per the outlined requirements in the issue description. Pay close attention to details to ensure your submission meets the expectations.
   - Submit your pull request (PR) on GitHub with all the required elements, including documentation, test cases, or any relevant files that demonstrate your work.
   - Engage with reviewers to refine your submission if requested.

4. **Earn Rewards**:
   - Once your PR is reviewed, accepted, and merged into the main project, you will receive the bounty payment associated with the task.
   - Your contributor status in the Swarms community will be updated, showcasing your involvement and accomplishments.

## Contribution Guidelines

To ensure high-quality contributions and streamline the process, please adhere to the following guidelines:

- Familiarize yourself with the [Swarms Contribution Guidelines](https://github.com/kyegomez/swarms/blob/main/CONTRIBUTING.md). These guidelines outline coding standards, best practices, and procedures for contributing effectively.

- Ensure your code is clean, modular, and well-documented. Contributions that adhere to the project's standards are more likely to be accepted.

- Actively communicate with the Swarms team and other contributors. Clear communication helps resolve uncertainties, avoids duplication, and fosters collaboration within the community.

## Get Involved

1. **Join the Community**:
   - Become an active member of the Swarms community by joining our Discord server: [Join Now](https://discord.gg/jM3Z6M9uMq). The Discord server serves as a hub for discussions, updates, and support.

2. **Stay Updated**:
   - Keep track of the latest updates, announcements, and bounty opportunities by regularly checking the Discord channel and the GitHub repository.

3. **Start Contributing**:
   - Dive into the Swarms GitHub repository: [Swarms GitHub](https://github.com/kyegomez/swarms). Explore the codebase, familiarize yourself with the project structure, and identify areas where you can make an impact.

## Additional Benefits

Beyond monetary rewards, contributors gain intangible benefits that elevate their professional journey:

- **Recognition**: Your contributions will be showcased to a community of over 9,000 engineers, increasing your visibility and credibility in the AI field.

- **Portfolio Building**: Add high-impact contributions to your portfolio, demonstrating your skills and experience to potential employers or collaborators.

- **Knowledge Sharing**: Learn from and collaborate with experts in agent engineering, gaining insights into the latest advancements and best practices in the field.

## Contact Us

For any questions, support, or clarifications, reach out to the Swarms team:

- **Discord**: Engage directly with the team and fellow contributors in our active channels.

- **GitHub**: Open an issue for specific questions or suggestions related to the project. We’re here to guide and assist you at every step of your contribution journey.

---

Join us in building the future of multi-agent collaboration and AI automation. With your contributions, we can create something truly extraordinary and transformative. Together, let’s pave the way for groundbreaking advancements in technology and innovation!
docs/corporate/checklist.md
ADDED
@@ -0,0 +1,122 @@
# **Swarms Framework Development Strategy Checklist**

## **Introduction**

The development of the Swarms framework requires a systematic and granular approach to ensure that each component is robust and that the overall framework is efficient and scalable. This checklist will serve as a guide to building Swarms from the ground up, breaking down tasks into small, manageable pieces.

---

## **1. Agent Level Development**

### **1.1 Model Integration**
- [ ] Research the most suitable models (e.g., OpenAI's GPT).
- [ ] Design an API for the agent to call the model.
- [ ] Implement error handling when model calls fail.
- [ ] Test the model with sample data for accuracy and speed.

### **1.2 Vectorstore Implementation**
- [ ] Design the schema for the vector storage system.
- [ ] Implement storage methods to add, delete, and update vectors.
- [ ] Develop retrieval methods with optimization for speed.
- [ ] Create protocols for vector-based communication between agents.
- [ ] Conduct stress tests to ascertain storage and retrieval speed.

### **1.3 Tools & Utilities Integration**
- [ ] List out essential tools required for agent functionality.
- [ ] Develop or integrate APIs for each tool.
- [ ] Implement error handling and logging for tool interactions.
- [ ] Validate tools integration with unit tests.

---

## **2. Worker Infrastructure Level Development**

### **2.1 Human Input Integration**
- [ ] Design a UI/UX for human interaction with worker nodes.
- [ ] Create APIs for input collection.
- [ ] Implement input validation and error handling.
- [ ] Test human input methods for clarity and ease of use.

### **2.2 Unique Identifier System**
- [ ] Research optimal formats for unique ID generation.
- [ ] Develop methods for generating and assigning IDs to agents.
- [ ] Implement a tracking system to manage and monitor agents via IDs.
- [ ] Validate the uniqueness and reliability of the ID system.

### **2.3 Asynchronous Operation Tools**
- [ ] Incorporate libraries/frameworks to enable asynchrony.
- [ ] Ensure tasks within an agent can run in parallel without conflict.
- [ ] Test asynchronous operations for efficiency improvements.

---

## **3. Swarm Level Development**

### **3.1 Orchestrator Design & Development**
- [ ] Draft a blueprint of orchestrator functionalities.
- [ ] Implement methods for task distribution among worker nodes.
- [ ] Develop communication protocols for the orchestrator to monitor workers.
- [ ] Create feedback systems to detect and address worker node failures.
- [ ] Test orchestrator with a mock swarm to ensure efficient task allocation.

### **3.2 Communication Layer Development**
- [ ] Select a suitable communication protocol/framework (e.g., gRPC, WebSockets).
- [ ] Design the architecture for scalable, low-latency communication.
- [ ] Implement methods for sending, receiving, and broadcasting messages.
- [ ] Test communication layer for reliability, speed, and error handling.

### **3.3 Task Management Protocols**
- [ ] Develop a system to queue, prioritize, and allocate tasks.
- [ ] Implement methods for real-time task status tracking.
- [ ] Create a feedback loop for completed tasks.
- [ ] Test task distribution, execution, and feedback systems for efficiency.

---

## **4. Hivemind Level Development**

### **4.1 Hivemind Orchestrator Development**
- [ ] Extend swarm orchestrator functionalities to manage multiple swarms.
- [ ] Create inter-swarm communication protocols.
- [ ] Implement load balancing mechanisms to distribute tasks across swarms.
- [ ] Validate hivemind orchestrator functionalities with multi-swarm setups.

### **4.2 Inter-Swarm Communication Protocols**
- [ ] Design methods for swarms to exchange data.
- [ ] Implement data reconciliation methods for swarms working on shared tasks.
- [ ] Test inter-swarm communication for efficiency and data integrity.

---

## **5. Scalability & Performance Testing**

- [ ] Simulate heavy loads to test the limits of the framework.
- [ ] Identify and address bottlenecks in both communication and computation.
- [ ] Conduct speed tests under different conditions.
- [ ] Test the system's responsiveness under various levels of stress.

---

## **6. Documentation & User Guide**

- [ ] Develop detailed documentation covering architecture, setup, and usage.
- [ ] Create user guides with step-by-step instructions.
- [ ] Incorporate visual aids, diagrams, and flowcharts for clarity.
- [ ] Update documentation regularly with new features and improvements.

---

## **7. Continuous Integration & Deployment**

- [ ] Set up CI/CD pipelines for automated testing and deployment.
- [ ] Ensure automatic rollback in case of deployment failures.
- [ ] Integrate code quality and security checks in the pipeline.
- [ ] Document deployment strategies and best practices.

---

## **Conclusion**

The Swarms framework represents a monumental leap in agent-based computation. This checklist provides a thorough roadmap for the framework's development, ensuring that every facet is addressed in depth. Through diligent adherence to this guide, the Swarms vision can be realized as a powerful, scalable, and robust system ready to tackle the challenges of tomorrow.

(Note: This document, given the word limit, provides a high-level overview. A full 5000-word document would delve into even more intricate details, nuances, potential pitfalls, and include considerations for security, user experience, compatibility, etc.)
docs/corporate/cost_analysis.md
ADDED
@@ -0,0 +1,100 @@
# Cost Structure of Deploying Autonomous Agents

## Table of Contents

1. Introduction
2. Our Time: Generating System Prompts and Custom Tools
3. Consultancy Fees
4. Model Inference Infrastructure
5. Deployment and Continual Maintenance
6. Output Metrics: Blog Generation Rates

---

## 1. Introduction

Autonomous agents are revolutionizing various industries, from self-driving cars to chatbots and customer service solutions. The prospect of automation and improved efficiency makes these agents attractive investments. However, like any other technological solution, deploying autonomous agents involves several cost elements that organizations need to consider carefully. This comprehensive guide aims to provide an exhaustive outline of the costs associated with deploying autonomous agents.

---

## 2. Our Time: Generating System Prompts and Custom Tools

### Description

The deployment of autonomous agents often requires a substantial investment of time to develop system prompts and custom tools tailored to specific operational needs.

### Costs

| Task                     | Time Required (Hours) | Cost per Hour ($) | Total Cost ($) |
| ------------------------ | --------------------- | ----------------- | -------------- |
| System Prompts Design    | 50                    | 100               | 5,000          |
| Custom Tools Development | 100                   | 100               | 10,000         |
| **Total**                | **150**               |                   | **15,000**     |

---

## 3. Consultancy Fees

### Description

Consultation is often necessary for navigating the complexities of autonomous agents. This includes system assessment, customization, and other essential services.

### Costs

| Service              | Fees ($)   |
| -------------------- | ---------- |
| Initial Assessment   | 5,000      |
| System Customization | 7,000      |
| Training             | 3,000      |
| **Total**            | **15,000** |

---

## 4. Model Inference Infrastructure

### Description

The hardware and software needed for the agent's functionality, known as the model inference infrastructure, form a significant part of the costs.

### Costs

| Component         | Cost ($)   |
| ----------------- | ---------- |
| Hardware          | 10,000     |
| Software Licenses | 2,000      |
| Cloud Services    | 3,000      |
| **Total**         | **15,000** |

---

## 5. Deployment and Continual Maintenance

### Description

Once everything is in place, deploying the autonomous agents and their ongoing maintenance are the next major cost factors.

### Costs

| Task                | Monthly Cost ($) | Annual Cost ($) |
| ------------------- | ---------------- | --------------- |
| Deployment          | 5,000            | 60,000          |
| Ongoing Maintenance | 1,000            | 12,000          |
| **Total**           | **6,000**        | **72,000**      |

---

## 6. Output Metrics: Blog Generation Rates

### Description

To provide a sense of what an investment in autonomous agents can yield, we offer the following data regarding the number of blogs that can be generated as an example of output.

### Blog Generation Rates

| Timeframe | Number of Blogs |
|-----------|-----------------|
| Per Day   | 20              |
| Per Week  | 140             |
| Per Month | 600             |
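
As a rough, back-of-the-envelope illustration using only the figures above (an estimate, not a quote): 600 blogs per month is 7,200 blogs per year, so the $72,000 annual deployment and maintenance cost works out to about $10 per blog. Amortizing the roughly $45,000 of one-time costs (time, consultancy, and infrastructure at $15,000 each) over the first year adds about $6.25 more per blog.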
docs/corporate/culture.md
ADDED
@@ -0,0 +1,56 @@
# Swarms Corp Culture Document

## **Our Mission and Purpose**
At Swarms Corp, we believe in more than just building technology. We are advancing humanity by pioneering systems that allow agents—both AI and human—to collaborate seamlessly, working toward the betterment of society and unlocking a future of abundance. Our mission is everything, and each of us is here because we understand the transformative potential of our work. We are not just a company; we are a movement aimed at reshaping the future. We strive to create systems that can tackle the most complex challenges facing humanity, from climate change to inequality, with solutions that are powered by collective intelligence.

Our purpose goes beyond just technological advancement. We are here to create tools that empower people, uplift communities, and set a new standard for what technology can achieve when the mission is clear and the commitment is unwavering. We see every project as a step toward something greater—an abundant future where human potential is limitless and artificial intelligence serves as a powerful ally to mankind.

## **Values We Live By**

### 1. **Hard Work: No Stone Unturned**
We believe that hard work is the foundation of all great achievements. At Swarms Corp, each member of the team is dedicated to putting in the effort required to solve complex problems. This isn’t just about long hours—it’s about focused, intentional work that leads to breakthroughs. We hold each other to high standards, and we don’t shy away from the hard paths when the mission calls for it. Every challenge we face is an opportunity to demonstrate our resilience and our commitment to excellence. We understand that the pursuit of groundbreaking innovation demands not just effort, but a relentless curiosity and the courage to face the unknown.

At Swarms Corp, we respect the grind because we know that transformative change doesn’t happen overnight. It requires continuous effort, sacrifice, and an unwavering focus on the task at hand. We celebrate hard work, not because it’s difficult, but because we understand its potential to transform ambitious ideas into tangible solutions. We honor the sweat equity that goes into building something that can truly make a difference.

### 2. **Mission Above Everything**
Our mission is our guiding star. Every decision, every task, and every project must align with our overarching purpose: advancing humanity and creating a post-scarcity world. This means sometimes putting the collective goal ahead of individual preferences or comfort. We’re here to do something much larger than ourselves, and we prioritize the mission with relentless commitment. We know that personal sacrifices will often be necessary, and we embrace that reality because the rewards of our mission are far greater than any individual gain.

When we say "mission above everything," we mean that our focus is not just on immediate success, but on creating a lasting impact that will benefit future generations. Our mission provides meaning and direction to our daily efforts, and we see every task as a small yet crucial part of our broader vision. We remind ourselves constantly of why we are here and who we are working for—not just our customers or stakeholders, but humanity as a whole.

### 3. **Finding the Shortest Path**
Innovation thrives on efficiency. At Swarms Corp, we value finding the shortest, most effective paths to reach our goals. We encourage everyone to question the status quo, challenge existing processes, and ask, “Is there a better way to do this?” Creativity means finding new routes—whether by leveraging automation, questioning outdated steps, or collaborating to uncover insights faster. We honor those who seek smarter paths over conventional ones. Efficiency is not just about saving time—it’s about maximizing impact and ensuring that every ounce of effort drives meaningful progress.

Finding the shortest path is about eliminating unnecessary complexity and focusing our energy on what truly matters. We encourage a culture of continuous improvement, where each team member is empowered to innovate on processes, tools, and methodologies. The shortest path does not mean cutting corners—it means removing obstacles, optimizing workflows, and focusing on high-leverage activities that bring us closer to our mission. We celebrate those who find elegant, effective solutions that others might overlook.

### 4. **Advancing Humanity**
The ultimate goal of everything we do is to elevate humanity. We envision a world where intelligence—both human and artificial—works in harmony to improve lives, solve global challenges, and expand possibilities. This ethos drives our work, whether it’s developing advanced AI systems, collaborating with others to push technological boundaries, or thinking deeply about how our creations can impact society in positive ways. Every line of code, every idea, and every strategy should move us closer to this vision.

Advancing humanity means we always think about the ethical implications of our work. We are deeply aware that the technology we create has the power to transform lives, and with that power comes the responsibility to ensure our contributions are always positive. We seek not only to push the boundaries of what technology can do but also to ensure that these advancements are inclusive and equitable. Our focus is on building a future where every person has access to the tools and opportunities they need to thrive.

Our vision is to bridge the gap between technology and humanity’s most pressing needs. We aim to democratize intelligence, making it available for everyone, regardless of their background or resources. This is how we advance humanity—not just through technological feats, but by ensuring that our innovations serve the greater good and uplift everyone.

## **Our Way of Working**

- **Radical Ownership**: Each team member is not just a contributor but an owner of their domain. We take full responsibility for outcomes, follow through on our promises, and ensure that nothing falls through the cracks. We don’t wait for permission—we act, innovate, and lead. Radical ownership means understanding that our actions have a direct impact on the success of our mission. It’s about proactive problem-solving and always stepping up when we see an opportunity to make a difference.

- **Honesty and Respect**: We communicate openly and respect each other’s opinions. Tough conversations are a natural part of building something impactful. We face challenges head-on with honesty and directness while maintaining a respectful and supportive atmosphere. Honesty fosters trust, and trust is the foundation of any high-performing team. We value feedback and see it as an essential tool for growth—both for individuals and for the organization as a whole.

- **One Team, One Mission**: Collaboration isn’t just encouraged—it’s essential. We operate as a swarm, where each agent contributes to a greater goal, learning from each other, sharing knowledge, and constantly iterating together. We celebrate wins collectively and approach obstacles with a unified spirit. No one succeeds alone; every achievement is the result of collective effort. We lift each other up, and we know that our strength lies in our unity and shared purpose.

- **The Future is Ours to Shape**: Our work is inherently future-focused. We’re not satisfied with simply keeping up—we want to set the pace. Every day, we take one step closer to a future where humanity’s potential is limitless, where scarcity is eliminated, and where intelligence—human and machine—advances society. We are not passive participants in the future; we are active shapers of it. We imagine a better tomorrow, and then we take deliberate steps to create it. Our work today will define what the world looks like tomorrow.

## **Expectations**

- **Be Bold**: Don’t be afraid to take risks. Innovation requires experimentation, and sometimes that means making mistakes. We support each other in learning from failures and taking smart, calculated risks. Boldness is at the heart of progress. We want every member of Swarms Corp to feel empowered to think outside the box, propose unconventional ideas, and drive innovation. Mistakes are seen not as setbacks, but as opportunities for learning and growth.

- **Keep the Mission First**: Every decision we make should be with our mission in mind. Ask yourself how your work advances the cause of creating an abundant future. The mission is the yardstick against which we measure our efforts, ensuring that everything we do pushes us closer to our ultimate goals. We understand that the mission is bigger than any one of us, and we strive to contribute meaningfully every day.

- **Find Solutions, Not Problems**: While identifying issues is important, we value those who come with solutions. Embrace challenges as opportunities to innovate and find ways to make an impact. We foster a culture of proactive problem-solving where obstacles are seen as opportunities to exercise creativity. If something’s broken, we fix it. If there’s a better way, we find it. We expect our team members to be solution-oriented, always seeking ways to turn challenges into stepping stones for progress.

- **Think Big, Act Fast**: We’re not here to make small changes—we’re here to revolutionize how we think about intelligence, automation, and society. Dream big, but work with urgency. We are tackling problems of immense scale, and we must move with intention and speed. Thinking big means envisioning a world that is radically different and better, and acting fast means executing the steps to get us there without hesitation. We value ambition and the courage to move swiftly when the time is right.

## **Our Commitment to You**
Swarms Corp is a place for dreamers and doers, for those who are driven by purpose and are unafraid of the work required to achieve it. We commit to providing you with the tools, support, and environment you need to contribute meaningfully to our mission. We are here to advance humanity together, one agent, one solution, one breakthrough at a time. We pledge to nurture an environment that encourages creativity, collaboration, and bold thinking. Here, you will find a community that celebrates your wins, supports you through challenges, and pushes you to be your best self.

Our commitment also includes ensuring that your voice is heard. We are building the future together, and every perspective matters. We strive to create an inclusive space where diversity of thought is welcomed, and where each team member feels valued for their unique contributions. At Swarms Corp, you are not just part of a team—you are part of a mission that aims to change the course of humanity for the better. Together, we’ll make the impossible possible, one breakthrough at a time.
docs/corporate/data_room.md
ADDED
@@ -0,0 +1,112 @@
# Swarms Data Room

## Table of Contents

**Introduction**

- Overview of the Company

- Vision and Mission Statement

- Executive Summary

**Corporate Documents**

- Articles of Incorporation

- Bylaws

- Shareholder Agreements

- Board Meeting Minutes

- Company Structure and Org Chart

**Financial Information**

- Historical Financial Statements

- Income Statements

- Balance Sheets

- Cash Flow Statements

- Financial Projections and Forecasts

- Cap Table

- Funding History and Use of Funds

**Products and Services**

- Detailed Descriptions of Products/Services

- Product Development Roadmap

- User Manuals and Technical Specifications

- Case Studies and Use Cases

## **Introduction**
Swarms provides automation-as-a-service through swarms of autonomous agents that work together as a team. We enable our customers to build, deploy, and scale production-grade multi-agent applications to automate real-world tasks.

### **Vision**
Our vision for 2024 is to provide the most reliable infrastructure for deploying autonomous agents into the real world through the Swarm Cloud, our premier cloud platform for the scalable deployment of Multi-Modal Autonomous Agents. The platform focuses on delivering maximum value to users by only taking a small fee when utilizing the agents for the hosted compute power needed to host the agents.

### **Executive Summary**
The Swarm Corporation aims to enable AI models to automate complex workflows and operations, not just singular low-value tasks. We believe collaboration between multiple agents can overcome limitations of individual agents for reasoning, planning, etc. This will allow automation of processes in mission-critical industries like security, logistics, and manufacturing where AI adoption is currently low.

We provide an open source framework to deploy production-grade multi-modal agents in just a few lines of code. This builds our user base, recruits talent, gets customer feedback to improve products, and gains awareness and trust.

Our business model focuses on customer satisfaction, openness, integration with other tools/platforms, and production-grade reliability.

Our go-to-market strategy is to get the framework to product-market fit with over 50K weekly recurring users, then secure high-value contracts in target industries. Long-term monetization will come via microtransactions, usage-based pricing, and subscriptions.

The team has thousands of hours building and optimizing autonomous agents. Leadership includes AI engineers, product experts, open source contributors, and community builders.

Key milestones: reach 80K framework users in January 2024, start contracts in target verticals, and introduce commercial products in 2025 with various pricing models.

### **Resources**
- [Swarm Pre-Seed Deck](https://drive.google.com/file/d/1n8o2mjORbG96uDfx4TabjnyieludYaZz/view?usp=sharing)
- [Swarm Memo](https://docs.google.com/document/d/1hS_nv_lFjCqLfnJBoF6ULY9roTbSgSuCkvXvSUSc7Lo/edit?usp=sharing)

## **Financial Documents**
This section is dedicated entirely to corporate financial documents.

- [Cap Table](https://docs.google.com/spreadsheets/d/1wuTWbfhYaY5Xp6nSQ9R0wDtSpwSS9coHxsjKd0UbIDc/edit?usp=sharing)

- [Cashflow Prediction Sheet](https://docs.google.com/spreadsheets/d/1HQEHCIXXMHajXMl5sj8MEfcQtWfOnD7GjHtNiocpD60/edit?usp=sharing)

------

## **Product**
Swarms is an open-source Python framework that enables seamless, reliable, and scalable multi-agent orchestration through modularity, customization, and precision.

- [Swarms GitHub Page](https://github.com/kyegomez/swarms)
- [Swarms Memo](https://docs.google.com/document/d/1hS_nv_lFjCqLfnJBoF6ULY9roTbSgSuCkvXvSUSc7Lo/edit)
- [Swarms Project Board](https://github.com/users/kyegomez/projects/1)
- [Swarms Website](https://www.swarms.world/g)
- [Swarm Ecosystem](https://github.com/kyegomez/swarm-ecosystem)
- [Swarm Core](https://github.com/kyegomez/swarms-core)

### Product Growth Metrics
| Name | Description | Link |
|----------------------------------|---------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------|
| Total Downloads of all time | Total number of downloads for the product over its entire lifespan. | [![Downloads](https://static.pepy.tech/badge/swarms)](https://pepy.tech/project/swarms) |
| Downloads this month | Number of downloads for the product in the current month. | [![Downloads](https://static.pepy.tech/badge/swarms/month)](https://pepy.tech/project/swarms) |
| Total Downloads this week | Total number of downloads for the product in the current week. | [![Downloads](https://static.pepy.tech/badge/swarms/week)](https://pepy.tech/project/swarms) |
| GitHub Forks | Number of times the product's codebase has been copied for optimization, contribution, or usage. | [![GitHub forks](https://img.shields.io/github/forks/kyegomez/swarms)](https://github.com/kyegomez/swarms/network) |
| GitHub Stars | Number of users who have 'liked' the project. | [![GitHub stars](https://img.shields.io/github/stars/kyegomez/swarms)](https://github.com/kyegomez/swarms/stargazers) |
| Pip Module Metrics | Various project statistics such as watchers, number of contributors, date repository was created, and more. | [CLICK HERE](https://libraries.io/github/kyegomez/swarms) |
| Contribution Based Statistics | Statistics like number of contributors, lines of code changed, etc. | [HERE](https://github.com/kyegomez/swarms/graphs/contributors) |
| GitHub Community Insights | Insights into the GitHub community around the product. | [GitHub Community Insights](https://github.com/kyegomez/swarms/graphs/community) |
| GitHub Traffic Metrics | Metrics related to traffic, such as views and clones on GitHub. | [GitHub Traffic Metrics](https://github.com/kyegomez/swarms/graphs/traffic) |
| Issues with the framework | Current open issues for the product on GitHub. | [![GitHub issues](https://img.shields.io/github/issues/kyegomez/swarms)](https://github.com/kyegomez/swarms/issues) |
docs/corporate/demos.md
ADDED
@@ -0,0 +1,9 @@
# Demo Ideas

* We could also try to create an AI influencer run by a swarm: let it create a whole identity and generate images, memes, and other content for Twitter, Reddit, etc.

* We should build either a more general version of this, or a swarm, or both: something connecting the calendars, events, and initiatives of all the AI communities (LangChain, LAION, EleutherAI, LessWrong, Gato, Rob Miles, ChatGPT hackers, etc.).

* A swarm of AI influencers to spread marketing.

* A delegation system to better organize teams: start with a team of passionate humans and let them self-report their skills/strengths so the agent has a concept of who to delegate to. Then feed the agent a huge task list (like the bullet list a few messages above) that it breaks down into actionable steps and "prompts" specific team members to complete tasks. It could even suggest breakout teams of a few people with complementary skills to tackle more complex tasks. There could also be a live board that updates each time a team member completes something, to encourage momentum and keep track of progress.
docs/corporate/design.md
ADDED
@@ -0,0 +1,152 @@
1 |
+
# Design Philosophy Document for Swarms
|
2 |
+
|
3 |
+
## Usable
|
4 |
+
|
5 |
+
### Objective
|
6 |
+
|
7 |
+
Our goal is to ensure that Swarms is intuitive and easy to use for all users, regardless of their level of technical expertise. This includes the developers who implement Swarms in their applications, as well as end users who interact with the implemented systems.
|
8 |
+
|
9 |
+
### Tactics
|
10 |
+
|
11 |
+
- Clear and Comprehensive Documentation: We will provide well-written and easily accessible documentation that guides users through using and understanding Swarms.
|
12 |
+
- User-Friendly APIs: We'll design clean and self-explanatory APIs that help developers to understand their purpose quickly.
|
13 |
+
- Prompt and Effective Support: We will ensure that support is readily available to assist users when they encounter problems or need help with Swarms.
|
14 |
+
|
15 |
+
## Reliable
|
16 |
+
|
17 |
+
### Objective
|
18 |
+
|
19 |
+
Swarms should be dependable and trustworthy. Users should be able to count on Swarms to perform consistently and without error or failure.
|
20 |
+
|
21 |
+
### Tactics
|
22 |
+
|
23 |
+
- Robust Error Handling: We will focus on error prevention, detection, and recovery to minimize failures in Swarms.
|
24 |
+
- Comprehensive Testing: We will apply various testing methodologies such as unit testing, integration testing, and stress testing to validate the reliability of our software.
|
25 |
+
- Continuous Integration/Continuous Delivery (CI/CD): We will use CI/CD pipelines to ensure that all changes are tested and validated before they're merged into the main branch.
|
26 |
+
|
27 |
+
## Fast
|
28 |
+
|
29 |
+
### Objective
|
30 |
+
|
31 |
+
Swarms should offer high performance and rapid response times. The system should be able to handle requests and tasks swiftly.
|
32 |
+
|
33 |
+
### Tactics
|
34 |
+
|
35 |
+
- Efficient Algorithms: We will focus on optimizing our algorithms and data structures to ensure they run as quickly as possible.
|
36 |
+
- Caching: Where appropriate, we will use caching techniques to speed up response times.
|
37 |
+
- Profiling and Performance Monitoring: We will regularly analyze the performance of Swarms to identify bottlenecks and opportunities for improvement.
|
38 |
+
|
39 |
+
## Scalable
|
40 |
+
|
41 |
+
### Objective
|
42 |
+
|
43 |
+
Swarms should be able to grow in capacity and complexity without compromising performance or reliability. It should be able to handle increased workloads gracefully.
|
44 |
+
|
45 |
+
### Tactics
|
46 |
+
|
47 |
+
- Modular Architecture: We will design Swarms using a modular architecture that allows for easy scaling and modification.
|
48 |
+
- Load Balancing: We will distribute tasks evenly across available resources to prevent overload and maximize throughput.
|
49 |
+
- Horizontal and Vertical Scaling: We will design Swarms to be capable of both horizontal (adding more machines) and vertical (adding more power to an existing machine) scaling.
|
50 |
+
|
51 |
+
### Philosophy
|
52 |
+
|
53 |
+
Swarms is designed with a philosophy of simplicity and reliability. We believe that software should be a tool that empowers users, not a hurdle that they need to overcome. Therefore, our focus is on usability, reliability, speed, and scalability. We want our users to find Swarms intuitive and dependable, fast and adaptable to their needs. This philosophy guides all of our design and development decisions.

# Swarm Architecture Design Document

## Overview

The goal of the Swarm Architecture is to provide a flexible and scalable system for building swarm intelligence models that can solve complex problems. This document details the proposed design for a plug-and-play system that makes it easy to create custom swarms and provides pre-configured swarms with multi-modal agents.

## Design Principles

- **Modularity**: The system will be built in a modular fashion, allowing various components to be easily swapped or upgraded.
- **Interoperability**: Different swarm classes and components should be able to work together seamlessly.
- **Scalability**: The design should support the growth of the system by adding more components or swarms.
- **Ease of Use**: Users should be able to easily create their own swarms or use pre-configured ones with minimal configuration.

## Design Components

### BaseSwarm

The BaseSwarm is an abstract base class that defines the basic structure of a swarm and the methods that need to be implemented. Any new swarm should inherit from this class and implement the required methods, as in the sketch below.
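
A minimal sketch of what such a base class could look like; the abstract method names follow the responsibilities described in this document (components, worker nodes, boss node, and running the swarm), but the exact signatures are assumptions, not the final API.

```python
from abc import ABC, abstractmethod


class BaseSwarm(ABC):
    """Abstract base class: concrete swarms implement these methods."""

    def __init__(self, openai_api_key: str):
        self.openai_api_key = openai_api_key

    @abstractmethod
    def initialize_components(self):
        """Set up tools, vector stores, and other shared components."""

    @abstractmethod
    def initialize_worker_nodes(self):
        """Create the worker agents that execute tasks."""

    @abstractmethod
    def initialize_boss_node(self):
        """Create the boss agent that plans and delegates."""

    @abstractmethod
    def run_swarms(self, objective: str):
        """Run the swarm against the given objective."""
```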

### Swarm Classes

Various swarm classes can be implemented by inheriting from the BaseSwarm class. Each swarm class should implement the required methods for initializing the components, worker nodes, and boss node, and for running the swarm.

Pre-configured swarm classes with multi-modal agents can be provided for ease of use. These classes come with a default configuration of tools and agents, which can be used out of the box.

### Tools and Agents

Tools and agents are the components that provide the actual functionality to the swarms. They can be language models, AI assistants, vector stores, or any other components that help in problem solving.

To make the system plug-and-play, a standard interface should be defined for these components. Any new tool or agent should implement this interface so that it can be easily plugged into the system; a sketch of one such interface follows.
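
The sketch below uses a structural `Protocol` for the standard interface; the name `SwarmComponent` and the single `run` method are illustrative assumptions, not the framework's actual definitions.

```python
from typing import Any, Protocol


class SwarmComponent(Protocol):
    """Anything with a name and a run method can be plugged into a swarm."""

    name: str

    def run(self, task: str, **kwargs: Any) -> str:
        """Execute the task and return the result as text."""
        ...
```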

## Usage

Users can either use pre-configured swarms or create their own custom swarms.

To use a pre-configured swarm, they can simply instantiate the corresponding swarm class and call the run method with the required objective.

To create a custom swarm, they need to:

1. Define a new swarm class inheriting from BaseSwarm.
2. Implement the required methods for the new swarm class.
3. Instantiate the swarm class and call the run method.

### Example

```python
# Using a pre-configured swarm
swarm = PreConfiguredSwarm(openai_api_key)
swarm.run_swarms(objective)


# Creating a custom swarm
class CustomSwarm(BaseSwarm):
    def run_swarms(self, objective):
        # Implement the required swarm logic here
        ...


swarm = CustomSwarm(openai_api_key)
swarm.run_swarms(objective)
```

## Conclusion

This Swarm Architecture design provides a scalable and flexible system for building swarm intelligence models. The plug-and-play design allows users to easily use pre-configured swarms or create their own custom swarms.


# Swarming Architectures

Below are ten swarm architectures with their base requirements; a sketch of an abstract class that processes their shared components follows the list.

1. **Hierarchical Swarm**: This architecture is characterized by a boss/worker relationship. The boss node makes high-level decisions and delegates tasks to the worker nodes. The worker nodes perform tasks and report back to the boss node.
   - Requirements: Boss node (can be a large language model), worker nodes (can be smaller language models), and a task queue for task management.

2. **Homogeneous Swarm**: In this architecture, all nodes in the swarm are identical and contribute equally to problem-solving. Each node has the same capabilities.
   - Requirements: Homogeneous nodes (can be language models of the same size) and a communication protocol for nodes to share information.

3. **Heterogeneous Swarm**: This architecture contains different types of nodes, each with its specific capabilities. This diversity can lead to more robust problem-solving.
   - Requirements: Different types of nodes (can be different types and sizes of language models), a communication protocol, and a mechanism to delegate tasks based on node capabilities.

4. **Competitive Swarm**: In this architecture, nodes compete with each other to find the best solution. The system may use a selection process to choose the best solutions.
   - Requirements: Nodes (can be language models), a scoring mechanism to evaluate node performance, and a selection mechanism.

5. **Cooperative Swarm**: In this architecture, nodes work together and share information to find solutions. The focus is on cooperation rather than competition.
   - Requirements: Nodes (can be language models), a communication protocol, and a consensus mechanism to agree on solutions.

6. **Grid-based Swarm**: This architecture positions agents on a grid, where they can only interact with their neighbors. This is useful for simulations, especially in fields like ecology or epidemiology.
   - Requirements: Agents (can be language models), a grid structure, and a neighborhood definition (i.e., how to identify neighboring agents).

7. **Particle Swarm Optimization (PSO) Swarm**: In this architecture, each agent represents a potential solution to an optimization problem. Agents move in the solution space based on their own and their neighbors' past performance. PSO is especially useful for continuous numerical optimization problems.
   - Requirements: Agents (each representing a solution), a definition of the solution space, an evaluation function to rate the solutions, and a mechanism to adjust agent positions based on performance.

8. **Ant Colony Optimization (ACO) Swarm**: Inspired by ant behavior, this architecture has agents leave a pheromone trail that other agents follow, reinforcing the best paths. It's useful for problems like the traveling salesperson problem.
   - Requirements: Agents (can be language models), a representation of the problem space, and a pheromone updating mechanism.

9. **Genetic Algorithm (GA) Swarm**: In this architecture, agents represent potential solutions to a problem. They can 'breed' to create new solutions and can undergo 'mutations'. GA swarms are good for search and optimization problems.
   - Requirements: Agents (each representing a potential solution), a fitness function to evaluate solutions, a crossover mechanism to breed solutions, and a mutation mechanism.

10. **Stigmergy-based Swarm**: In this architecture, agents communicate indirectly by modifying the environment, and other agents react to those modifications. It's a decentralized method of coordinating tasks.
    - Requirements: Agents (can be language models), an environment that agents can modify, and a mechanism for agents to perceive environment changes.

These architectures all have unique features and requirements, but they share the need for agents (often implemented as language models) and a mechanism for agents to communicate or interact, whether directly through messages, indirectly through the environment, or implicitly through a shared solution space. Some also require specific data structures, like a grid or problem space, and specific algorithms, like those for evaluating solutions or updating agent positions.
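
A sketch of the abstract class mentioned above that processes these shared components; the class and method names are illustrative, and each of the ten architectures would supply its own interaction and evaluation rules.

```python
from abc import ABC, abstractmethod


class SwarmArchitecture(ABC):
    """Hypothetical base class for the architectures listed above."""

    def __init__(self, agents: list):
        self.agents = agents  # e.g., language-model-backed nodes

    @abstractmethod
    def interact(self) -> None:
        """One round of interaction: direct messages, pheromone updates,
        neighborhood exchanges, crossover, etc., per architecture."""

    @abstractmethod
    def evaluate(self) -> float:
        """Score the swarm's current best solution."""

    def run(self, rounds: int) -> float:
        for _ in range(rounds):
            self.interact()
        return self.evaluate()
```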

docs/corporate/distribution.md
ADDED
@@ -0,0 +1,469 @@

# Swarms Monetization Strategy

This strategy includes a variety of business models, potential revenue streams, cashflow structures, and customer identification methods. Let's explore these further.

## Business Models

1. **Platform as a Service (PaaS):** Provide the Swarms AI platform on a subscription basis, charged monthly or annually. This could be tiered based on usage and access to premium features.

2. **API Usage-based Pricing:** Charge customers based on their usage of the Swarms API. The more requests made, the higher the fee.

3. **Managed Services:** Offer complete end-to-end solutions where you manage the entire AI infrastructure for the clients. This could be on a contract basis with a recurring fee.

4. **Training and Certification:** Provide Swarms AI training and certification programs for interested developers and businesses. These could be monetized as separate courses or subscription-based access.

5. **Partnerships:** Collaborate with large enterprises and offer them dedicated Swarm AI services. These could be performance-based contracts, ensuring a mutually beneficial relationship.

6. **Data as a Service (DaaS):** Leverage the data generated by Swarms for insights and analytics, providing valuable business intelligence to clients.

## Potential Revenue Streams

1. **Subscription Fees:** This would be the main revenue stream from providing the Swarms platform as a service.

2. **Usage Fees:** Additional revenue can come from usage fees for businesses that have high demand for the Swarms API.

3. **Contract Fees:** From offering managed services and bespoke solutions to businesses.

4. **Training Fees:** Revenue from providing training and certification programs to developers and businesses.

5. **Partnership Contracts:** Large-scale projects with enterprises, involving dedicated Swarm AI services, could provide substantial income.

6. **Data Insights:** Revenue from selling valuable business intelligence derived from Swarms' aggregated and anonymized data.

## Potential Customers

1. **Businesses Across Sectors:** Any business seeking to leverage AI for automation, efficiency, and data insights could be a potential customer. This includes sectors like finance, eCommerce, logistics, healthcare, and more.

2. **Developers:** Both freelancers and those working in organizations could use Swarms to enhance their projects and services.

3. **Enterprises:** Large enterprises looking to automate and optimize their operations could greatly benefit from Swarms.

4. **Educational Institutions:** Universities and research institutions could leverage Swarms for research and teaching purposes.

## Roadmap

1. **Landing Page Creation:** Develop a dedicated product page on apac.ai for Swarms.

2. **Hosted Swarms API:** Launch a cloud-based Swarms API service. It should be highly reliable, with robust documentation to attract daily users.

3. **Consumer and Enterprise Subscription Service:** Launch a comprehensive subscription service on The Domain. This would provide users with access to a wide array of APIs and data streams.

4. **Dedicated Capacity Deals:** Partner with large enterprises to offer them dedicated Swarm AI solutions for automating their operations.

5. **Enterprise Partnerships:** Develop partnerships with large enterprises for extensive contract-based projects.

6. **Integration with Collaboration Platforms:** Develop Swarms bots for platforms like Discord and Slack, charging users a subscription fee for access.

7. **Personal Data Instances:** Offer users dedicated instances of all their data that the Swarm can query as needed.

8. **Browser Extension:** Develop a browser extension that integrates with the Swarms platform, offering users a more seamless experience.

Remember, customer satisfaction and a value-centric approach are at the core of any successful monetization strategy. It's essential to continuously iterate and improve the product based on customer feedback and evolving market needs.

----

# Other ideas

1. **Platform as a Service (PaaS):** Create a cloud-based platform that allows users to build, run, and manage applications without the complexity of maintaining the infrastructure. You could charge users a subscription fee for access to the platform and provide different pricing tiers based on usage levels. This could be an attractive solution for businesses that do not have the capacity to build or maintain their own swarm intelligence solutions.

2. **Professional Services:** Offer consultancy and implementation services to businesses looking to utilize the Swarm technology. This could include assisting with integration into existing systems, offering custom development services, or helping customers build specific solutions using the framework.

3. **Education and Training:** Create a certification program for developers or companies looking to become proficient with the Swarms framework. This could be sold as standalone courses or bundled with other services.

4. **Managed Services:** Some companies may prefer to outsource the management of their Swarm-based systems. A managed services solution could take care of all the technical aspects, from hosting the solution to ensuring it runs smoothly, allowing the customer to focus on their core business.

5. **Data Analysis and Insights:** Swarm intelligence can generate valuable data and insights. By anonymizing and aggregating this data, you could provide industry reports, trend analysis, and other valuable insights to businesses.

As for the type of platform, Swarms can be offered as a cloud-based solution given its scalability and flexibility. This would also allow you to apply a SaaS/PaaS-type monetization model, which provides recurring revenue.

Potential customers could range from small to large enterprises in various sectors such as logistics, eCommerce, finance, and technology, who are interested in leveraging artificial intelligence and machine learning for complex problem solving, optimization, and decision-making.

**Product Brief Monetization Strategy:**

Product Name: Swarms.AI Platform

Product Description: A cloud-based AI and ML platform harnessing the power of swarm intelligence.

1. **Platform as a Service (PaaS):** Offer tiered subscription plans (Basic, Premium, Enterprise) to accommodate different usage levels and business sizes.

2. **Professional Services:** Offer consultancy and custom development services to tailor the Swarms solution to the specific needs of the business.

3. **Education and Training:** Launch an online Swarms.AI Academy with courses and certifications for developers and businesses.

4. **Managed Services:** Provide a premium, fully-managed service offering that includes hosting, maintenance, and 24/7 support.

5. **Data Analysis and Insights:** Offer industry reports and customized insights generated from aggregated and anonymized Swarm data.

Potential Customers: Enterprises in sectors such as logistics, eCommerce, finance, and technology. This can be sold globally, provided there's an internet connection.

Marketing Channels: Online marketing (SEO, content marketing, social media), partnerships with tech companies, and direct sales to enterprises.

This strategy is designed to provide multiple revenue streams, while ensuring the Swarms.AI platform is accessible and useful to a range of potential customers.

1. **AI Solution as a Service:** By offering the Swarms framework as a service, businesses can access and utilize the power of multiple LLM agents without the need to maintain the infrastructure themselves. Subscriptions can be tiered based on usage and additional features.

2. **Integration and Custom Development:** Offer integration services to businesses wanting to incorporate the Swarms framework into their existing systems. Also, you could provide custom development for businesses with specific needs not met by the standard framework.

3. **Training and Certification:** Develop an educational platform offering courses, webinars, and certifications on using the Swarms framework. This can serve both developers seeking to broaden their skills and businesses aiming to train their in-house teams.

4. **Managed Swarms Solutions:** For businesses that prefer to outsource their AI needs, provide a complete solution which includes the development, maintenance, and continuous improvement of swarms-based applications.

5. **Data Analytics Services:** Leveraging the aggregated insights from the AI swarms, you could offer data analytics services. Businesses can use these insights to make informed decisions and predictions.

**Type of Platform:**

A cloud-based platform or Software as a Service (SaaS) model would be suitable. It offers accessibility, scalability, and ease of updates.

**Target Customers:**

The technology can be beneficial for businesses across sectors like eCommerce, technology, logistics, finance, healthcare, and education, among others.

**Product Brief Monetization Strategy:**

Product Name: Swarms.AI

1. **AI Solution as a Service:** Offer different tiered subscriptions (Standard, Premium, and Enterprise), each with varying levels of usage and features.

2. **Integration and Custom Development:** Offer custom development and integration services, priced based on the scope and complexity of the project.

3. **Training and Certification:** Launch the Swarms.AI Academy with courses and certifications, available for a fee.

4. **Managed Swarms Solutions:** Offer fully managed solutions tailored to business needs, priced based on scope and service level agreements.

5. **Data Analytics Services:** Provide insightful reports and data analyses, which can be purchased on a one-off basis or through a subscription.

By offering a variety of services and payment models, Swarms.AI will be able to cater to a diverse range of business needs, from small start-ups to large enterprises. Marketing channels would include digital marketing, partnerships with technology companies, presence at tech events, and direct sales to targeted industries.

# Roadmap

* Create a landing page for Swarms at apac.ai/product/swarms.

* Create a Hosted Swarms API that anybody can use without needing massive GPU infrastructure, with usage-based pricing. Prerequisites for success: Swarms has to be extremely reliable, and we need world-class documentation and many daily users. How do we get many daily users? We provide a seamless and fluid experience. How do we create that experience? We write good code that is modular, provides feedback to the user in times of distress, and ultimately accomplishes the user's tasks.

* Hosted consumer and enterprise subscription as a service on The Domain, where users can interact with 1000s of APIs and ingest 1000s of different data streams.

* Hosted dedicated-capacity deals with mega enterprises on automating many operations with Swarms for a monthly subscription of $300,000+.

* Partnerships with enterprises: massive contracts with performance-based fees.

* Discord bot and/or Slack bot with users' personal data, charged as a subscription, plus a browser extension.

* Each user gets a dedicated ocean instance of all their data so the swarm can query it as needed.

---
---

# Swarms Monetization Strategy: A Revolutionary AI-powered Future

Swarms is a powerful AI platform leveraging the transformative potential of Swarm Intelligence. Our ambition is to monetize this groundbreaking technology in ways that generate significant cashflow while providing extraordinary value to our customers.

Here we outline our strategic monetization pathways and provide a roadmap that plots our course to future success.

---

## I. Business Models

1. **Platform as a Service (PaaS):** We provide the Swarms platform as a service, billed on a monthly or annual basis. Subscriptions can range from $50 for basic access to $500+ for premium features and extensive usage.

2. **API Usage-based Pricing:** Customers are billed according to their use of the Swarms API. Starting at $0.01 per request, this creates a cashflow model that rewards extensive platform usage.

3. **Managed Services:** We offer end-to-end solutions, managing clients' entire AI infrastructure. Contract fees start from $100,000 per month, offering both a sustainable cashflow and considerable savings for our clients.

4. **Training and Certification:** A Swarms AI training and certification program is available for developers and businesses. Course costs can range from $200 to $2,000, depending on course complexity and duration.

5. **Partnerships:** We forge collaborations with large enterprises, offering dedicated Swarm AI services. These performance-based contracts start from $1,000,000, creating a potentially lucrative cashflow stream.

6. **Data as a Service (DaaS):** Swarms-generated data is mined for insights and analytics, with business intelligence reports offered from $500 each.

---

## II. Potential Revenue Streams

1. **Subscription Fees:** From $50 to $500+ per month for platform access.

2. **Usage Fees:** From $0.01 per API request, generating income from high platform usage (a cost sketch follows this list).

3. **Contract Fees:** Starting from $100,000 per month for managed services.

4. **Training Fees:** From $200 to $2,000 for individual courses or subscription access.

5. **Partnership Contracts:** Contracts starting from $100,000, offering major income potential.

6. **Data Insights:** Business intelligence reports starting from $500.
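
As a back-of-the-envelope sketch of how these figures combine into a single customer's monthly bill, the function below uses the subscription and per-request numbers quoted above; the function itself is illustrative, not a billing system.

```python
def monthly_bill(subscription: float, requests: int, per_request: float = 0.01) -> float:
    """Subscription fee plus usage fees at $0.01 per API request."""
    return subscription + requests * per_request


# A premium subscriber ($500/month) making 200,000 API calls:
print(monthly_bill(500.0, 200_000))  # 2500.0
```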

---

## III. Potential Customers

1. **Businesses Across Sectors:** Our offerings cater to businesses across finance, eCommerce, logistics, healthcare, and more.

2. **Developers:** Both freelancers and organization-based developers can leverage Swarms for their projects.

3. **Enterprises:** Swarms offers large enterprises solutions for optimizing operations.

4. **Educational Institutions:** Universities and research institutions can use Swarms for research and teaching.

---

## IV. Roadmap

1. **Landing Page Creation:** Develop a dedicated Swarms product page on apac.ai.

2. **Hosted Swarms API:** Launch a reliable, well-documented cloud-based Swarms API service.

3. **Consumer and Enterprise Subscription Service:** Launch an extensive subscription service on The Domain, providing wide-ranging access to APIs and data streams.

4. **Dedicated Capacity Deals:** Offer large enterprises dedicated Swarm AI solutions, starting from a $300,000 monthly subscription.

5. **Enterprise Partnerships:** Develop performance-based contracts with large enterprises.

6. **Integration with Collaboration Platforms:** Develop Swarms bots for platforms like Discord and Slack, charging a subscription fee for access.

7. **Personal Data Instances:** Offer users dedicated data instances that the Swarm can query as needed.

8. **Browser Extension:** Develop a browser extension that integrates with the Swarms platform for a seamless user experience.

---

Our North Star remains customer satisfaction and value provision. As we embark on this journey, we continuously refine our product based on customer feedback and evolving market needs, ensuring we lead in the age of AI-driven solutions.

## **Platform Distribution Strategy for Swarms**

*Note: This strategy aims to diversify the presence of 'Swarms' across various platforms and mediums while focusing on monetization and value creation for its users.*

---

### **1. Framework:**

#### **Objective:**
To offer Swarms as an integrated solution within popular frameworks to ensure that developers and businesses can seamlessly incorporate its functionalities.

#### **Strategy:**

* **Language/Framework Integration:**
    * Target popular frameworks like Django and Flask for Python, Express.js for Node, etc.
    * Create SDKs or plugins for easy integration.

* **Monetization:**
    * Freemium Model: Offer basic integration for free, and charge for additional features or advanced integrations.
    * Licensing: Allow businesses to purchase licenses for enterprise-level integrations.

* **Promotion:**
    * Engage in partnerships with popular online coding platforms like Udemy, Coursera, etc., offering courses and tutorials on integrating Swarms.
    * Host webinars and write technical blogs to promote the integration benefits.

---

### **2. Paid API:**

#### **Objective:**
To provide a scalable solution for developers and businesses that want direct access to Swarms' functionalities without integrating the entire framework.

#### **Strategy:**

* **API Endpoints:**
    * Offer various endpoints catering to different functionalities.
    * Maintain robust documentation to ensure ease of use.

* **Monetization:**
    * Usage-based Pricing: Charge based on the number of API calls.
    * Subscription Tiers: Provide tiered packages based on usage limits and advanced features.

* **Promotion:**
    * List on API marketplaces like RapidAPI.
    * Engage in SEO to make the API documentation discoverable.

---

### **3. Domain Hosted:**

#### **Objective:**
To provide a centralized web platform where users can directly access and engage with Swarms' offerings.

#### **Strategy:**

* **User-Friendly Interface:**
    * Ensure a seamless user experience with intuitive design.
    * Incorporate features like real-time chat support, tutorials, and an FAQ section.

* **Monetization:**
    * Subscription Model: Offer monthly/annual subscriptions for premium features.
    * Affiliate Marketing: Partner with related tech products/services and earn through referrals.

* **Promotion:**
    * Invest in PPC advertising on platforms like Google Ads.
    * Engage in content marketing, targeting keywords related to Swarms' offerings.

---

### **4. Build Your Own (No-Code Platform):**

#### **Objective:**
To cater to the non-developer audience, allowing them to leverage Swarms' features without any coding expertise.

#### **Strategy:**

* **Drag-and-Drop Interface:**
    * Offer customizable templates.
    * Ensure integration with popular platforms and apps.

* **Monetization:**
    * Freemium Model: Offer basic features for free, and charge for advanced functionalities.
    * Marketplace for Plugins: Allow third-party developers to sell their plugins/extensions on the platform.

* **Promotion:**
    * Partner with no-code communities and influencers.
    * Offer promotions and discounts to early adopters.

---

### **5. Marketplace for the No-Code Platform:**

#### **Objective:**
To create an ecosystem where third-party developers can contribute, and users can enhance their Swarms experience.

#### **Strategy:**

* **Open API for Development:**
    * Offer robust documentation and developer support.
    * Ensure a strict quality check for marketplace additions.

* **Monetization:**
    * Revenue Sharing: Take a percentage cut from third-party sales.
    * Featured Listings: Charge developers for premium listings.

* **Promotion:**
    * Host hackathons and competitions to boost developer engagement.
    * Promote top plugins/extensions through email marketing and on the main platform.

---

### **Future Outlook & Expansion:**

* **Hosted Dedicated Capacity:** Hosted dedicated-capacity deals for enterprises starting at $399,999.
* **Decentralized Free Peer-to-Peer Endpoint Hosted on The Grid:** A hosted endpoint by the people, for the people.
* **Browser Extension:** The Athena browser extension for deep browser automation, monetized through subscription and usage fees.

* **Mobile Application:** Develop a mobile app version of Swarms to tap into the vast mobile user base.
* **Global Expansion:** Localize the platform for non-English-speaking regions to tap into global markets.
* **Continuous Learning:** Regularly collect user feedback and iterate on the product features.

---

### **50 Creative Distribution Platforms for Swarms**

1. **E-commerce Integrations:** Platforms like Shopify and WooCommerce, where Swarms can add value for sellers.

2. **Web Browser Extensions:** Chrome, Firefox, and Edge extensions that bring Swarms features directly to users.

3. **Podcasting Platforms:** Swarms-themed content on platforms like Spotify and Apple Podcasts to reach aural learners.

4. **Virtual Reality (VR) Platforms:** Integration with VR experiences on Oculus or Viveport.

5. **Gaming Platforms:** Tools or plugins for game developers on Steam and Epic Games.

6. **Decentralized Platforms:** Using blockchain, create decentralized app (DApp) versions of Swarms.

7. **Chat Applications:** Integrate with popular messaging platforms like WhatsApp, Telegram, and Slack.

8. **AI Assistants:** Integration with Siri, Alexa, and Google Assistant to provide Swarms functionalities via voice commands.

9. **Freelancing Websites:** Offer tools or services for freelancers on platforms like Upwork and Fiverr.

10. **Online Forums:** Platforms like Reddit and Quora, where users can discuss or access Swarms.

11. **Educational Platforms:** Sites like Khan Academy and Udacity, where Swarms can enhance learning experiences.

12. **Digital Art Platforms:** Integrate with platforms like DeviantArt and Behance.

13. **Open-source Repositories:** Host Swarms on GitHub, GitLab, and Bitbucket with open-source plugins.

14. **Augmented Reality (AR) Apps:** Create AR experiences powered by Swarms.

15. **Smart Home Devices:** Integrate Swarms' functionalities into smart home devices.

16. **Newsletters:** Platforms like Substack, where Swarms insights can be shared.

17. **Interactive Kiosks:** In malls, airports, and other public places.

18. **IoT Devices:** Incorporate Swarms in devices like smart fridges and smartwatches.

19. **Collaboration Tools:** Platforms like Trello and Notion, offering Swarms-enhanced productivity.

20. **Dating Apps:** An AI-enhanced matching algorithm powered by Swarms.

21. **Music Platforms:** Integrate with Spotify and SoundCloud for music-related AI functionalities.

22. **Recipe Websites:** Platforms like AllRecipes and Tasty with AI-recommended recipes.

23. **Travel & Hospitality:** Integrate with platforms like Airbnb and Tripadvisor for AI-based recommendations.

24. **Language Learning Apps:** Duolingo and Rosetta Stone integrations.

25. **Virtual Events Platforms:** Websites like Hopin and Zoom, where Swarms can enhance the virtual event experience.

26. **Social Media Management:** Tools like Buffer and Hootsuite with AI insights by Swarms.

27. **Fitness Apps:** Platforms like MyFitnessPal and Strava with AI fitness insights.

28. **Mental Health Apps:** Integration into apps like Calm and Headspace for AI-driven wellness.

29. **E-books Platforms:** Amazon Kindle and Audible with AI-enhanced reading experiences.

30. **Sports Analysis Tools:** Websites like ESPN and Sky Sports, where Swarms can provide insights.

31. **Financial Tools:** Integration into platforms like Mint and Robinhood for AI-driven financial advice.

32. **Public Libraries:** Digital platforms of public libraries for enhanced reading experiences.

33. **3D Printing Platforms:** Websites like Thingiverse and Shapeways with AI customization.

34. **Meme Platforms:** Websites like Memedroid and 9GAG, where Swarms can suggest memes.

35. **Astronomy Apps:** Platforms like Star Walk and NASA's Eyes with AI-driven space insights.

36. **Weather Apps:** Integration into Weather.com and AccuWeather for predictive analysis.

37. **Sustainability Platforms:** Websites like Ecosia and GoodGuide with AI-driven eco-tips.

38. **Fashion Apps:** Platforms like ASOS and Zara with AI-based style recommendations.

39. **Pet Care Apps:** Integration into PetSmart and Chewy for AI-driven pet care tips.

40. **Real Estate Platforms:** Websites like Zillow and Realtor with AI-enhanced property insights.

41. **DIY Platforms:** Websites like Instructables and DIY.org with AI project suggestions.

42. **Genealogy Platforms:** Ancestry and MyHeritage with AI-driven family tree insights.

43. **Car Rental & Sale Platforms:** Integration into AutoTrader and Turo for AI-driven vehicle suggestions.

44. **Wedding Planning Websites:** Platforms like Zola and The Knot with AI-driven planning.

45. **Craft Platforms:** Websites like Etsy and Craftsy with AI-driven craft suggestions.

46. **Gift Recommendation Platforms:** AI-driven gift suggestions for websites like Gifts.com.

47. **Study & Revision Platforms:** Websites like Chegg and Quizlet with AI-driven study guides.

48. **Local Business Directories:** Yelp and Yellow Pages with AI-enhanced reviews.

49. **Networking Platforms:** LinkedIn and Meetup with AI-driven connection suggestions.

50. **Lifestyle Magazines' Digital Platforms:** Websites like Vogue and GQ with AI-curated fashion and lifestyle insights.

---

*Endnote: Leveraging these diverse platforms ensures that Swarms becomes an integral part of multiple ecosystems, enhancing its visibility and user engagement.*

docs/corporate/failures.md
ADDED
@@ -0,0 +1,104 @@

# Failure Root Cause Analysis for Langchain

## 1. Introduction

Langchain is an open-source software project that has gained massive popularity in the artificial intelligence ecosystem, serving as a tool for connecting different language models, especially GPT-based models. However, despite its popularity and substantial investment, Langchain has shown several weaknesses that hinder its use in various projects, especially in complex and large-scale implementations. This document provides an analysis of the identified issues and proposes potential mitigation strategies.

## 2. Analysis of Weaknesses

### 2.1 Tool Lock-in

Langchain tends to enforce tool lock-in, which can prove detrimental for developers. Its design heavily relies on specific workflows and architectures, which greatly limits flexibility. Developers may find themselves restricted to certain methodologies, impeding their freedom to implement custom solutions or integrate alternative tools.

#### Mitigation

An ideal AI framework should not be restrictive but should instead offer the flexibility to integrate any agent on any architecture. Adopting an open architecture that allows for seamless interaction between various agents and workflows can address this issue.

### 2.2 Outdated Workflows

Langchain's current workflows and prompt engineering, mainly based on InstructGPT, are out of date, especially compared to newer models like ChatGPT/GPT-4.

#### Mitigation

Keeping up with the latest AI models and workflows is crucial. The framework should have a mechanism for regular updates and seamless integration of up-to-date models and workflows.

### 2.3 Debugging Difficulties

Debugging in Langchain is reportedly very challenging, even with verbose output enabled, making it hard to determine what is happening under the hood.

#### Mitigation

The introduction of a robust debugging and logging system would help users understand the internals of the models, enabling them to pinpoint and rectify issues more effectively.
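
A minimal sketch of the kind of logging such a system could expose, using Python's standard `logging` module; the agent call here is a hypothetical stand-in for a real model invocation.

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
log = logging.getLogger("swarm.agent")


def call_agent(prompt: str) -> str:
    log.debug("prompt sent: %r", prompt)
    response = f"echo: {prompt}"  # stand-in for the real model call
    log.debug("response received: %r", response)
    return response


call_agent("What is happening under the hood?")
```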

### 2.4 Limited Customization

Langchain makes it extremely hard to deviate from documented workflows. This becomes a challenge when developers need custom workflows for their specific use cases.

#### Mitigation

An ideal framework should support custom workflows and allow developers to hack and adjust the framework according to their needs.

### 2.5 Documentation

Langchain's documentation is reportedly missing relevant details, making it difficult for users to understand the differences between various agent types, among other things.

#### Mitigation

Providing detailed and comprehensive documentation, including examples, FAQs, and best practices, is crucial. This will help users understand the intricacies of the framework, making it easier for them to implement it in their projects.

### 2.6 Negative Influence on the AI Ecosystem

The extreme popularity of Langchain seems to be warping the AI ecosystem to the point of causing harm, with other AI entities shifting their operations to align with Langchain's 'magic AI' approach.

#### Mitigation

It's essential for any widely adopted framework to promote healthy practices in the broader ecosystem. One approach could be promoting open dialogue, inviting criticism, and being open to change based on feedback.

## 3. Conclusion

While Langchain has made significant contributions to the AI landscape, these challenges hinder its potential. Addressing these issues will not only improve Langchain but also foster a healthier AI ecosystem. It's important to note that criticism, when approached constructively, can be a powerful tool for growth and innovation.


# List of Weaknesses in Langchain and Potential Mitigations

1. **Tool Lock-in**: Langchain encourages the use of specific tools, creating a lock-in problem with minimal benefits for developers.

   *Mitigation Strategy*: Langchain should consider designing the architecture to be more versatile and allow for the inclusion of a variety of tools. An open architecture will provide developers with more freedom and customization options.

2. **Outdated Workflow**: The current workflow and prompt engineering of Langchain rely on outdated models like InstructGPT, which fall short compared to newer alternatives such as ChatGPT/GPT-4.

   *Mitigation Strategy*: Regular updates and adaptation of more recent models should be integrated into the Langchain framework.

3. **Debugging Difficulty**: Debugging a Langchain error is a complicated task, even with verbose=True, leading to a discouraging developer experience.

   *Mitigation Strategy*: Develop a comprehensive debugging tool or improve current debugging processes for clearer and more accessible error detection and resolution.

4. **Lack of Customizability**: Customizing workflows that are not documented in Langchain is quite challenging.

   *Mitigation Strategy*: Improve documentation and provide guides on how to customize workflows to enhance developer flexibility.

5. **Poor Documentation**: Langchain's documentation misses key details that developers have to manually search for in the codebase.

   *Mitigation Strategy*: Enhance and improve the documentation of Langchain to provide clarity for developers and make navigation easier.

6. **Harmful Ecosystem Influence**: Langchain's extreme popularity is pushing the AI ecosystem towards its workflows, potentially harming development and code clarity.

   *Mitigation Strategy*: Encourage diverse and balanced adoption of AI tools in the ecosystem.

7. **Suboptimal Performance**: Langchain's performance is sometimes underwhelming, and there are no clear benefits in terms of performance or abstraction.

   *Mitigation Strategy*: Enhance the performance optimization of Langchain. Benchmarking against other tools can also provide performance improvement insights.

8. **Rigid General Interface**: Langchain tries to do too many things, resulting in a rigid interface not suitable for practical use, especially in production.

   *Mitigation Strategy*: Focus on core features and allow greater flexibility in the interface. Adopting a modular approach where developers can pick and choose the features they want could also be helpful.

9. **Leaky Abstraction Problem**: Langchain's full-on framework approach has created a leaky abstraction problem, leading to a disappointing developer experience.

   *Mitigation Strategy*: Adopt a more balanced approach between a library and a framework. Provide a solid core feature set with the possibility to extend it according to the developers' needs.

10. **Excessive Focus on Third-party Services**: Langchain overly focuses on supporting every single third-party service at the expense of customizability and fine-tuning for actual applications.

    *Mitigation Strategy*: Prioritize fine-tuning and customizability for developers, limiting the focus on third-party services unless they provide substantial value.

Remember, any mitigation strategy will need to be tailored to Langchain's particular circumstances and developer feedback. It's also important to consider potential trade-offs and unintended consequences when implementing these strategies.

docs/corporate/faq.md
ADDED
@@ -0,0 +1,110 @@

### FAQ on Swarm Intelligence and Multi-Agent Systems

#### What is an agent in the context of AI and swarm intelligence?

In artificial intelligence (AI), an agent refers to an LLM with some objective to accomplish.

In swarm intelligence, each agent interacts with other agents and possibly the environment to achieve complex collective behaviors or solve problems more efficiently than individual agents could on their own.

#### Why do you need Swarms at all?

Individual agents are limited by a vast array of issues such as context window loss, single-task execution, hallucination, and lack of collaboration.

#### How does a swarm work?

A swarm works through the principles of decentralized control, local interactions, and simple rules followed by each agent. Unlike centralized systems, where a single entity dictates the behavior of all components, in a swarm each agent makes its own decisions based on local information and interactions with nearby agents. These local interactions lead to the emergence of complex, organized behaviors or solutions at the collective level, enabling the swarm to tackle tasks efficiently.

#### Why do you need more agents in a swarm?

More agents in a swarm can enhance its problem-solving capabilities, resilience, and efficiency. With more agents:

- **Diversity and Specialization**: The swarm can leverage a wider range of skills, knowledge, and perspectives, allowing for more creative and effective solutions to complex problems.
- **Scalability**: Adding more agents can increase the swarm's capacity to handle larger tasks or multiple tasks simultaneously.
- **Robustness**: A larger number of agents enhances the system's redundancy and fault tolerance, as the failure of a few agents has a minimal impact on the overall performance of the swarm.

#### Isn't it more expensive to use more agents?

While deploying more agents can initially increase costs, especially in terms of computational resources, hosting, and potentially API usage, several factors and strategies can mitigate these expenses:

- **Efficiency at Scale**: Larger swarms can often solve problems more quickly or effectively, reducing the overall computational time and resources required.
- **Optimization and Caching**: Implementing optimizations and caching strategies can reduce redundant computations, lowering the workload on individual agents and the overall system.
- **Dynamic Scaling**: Utilizing cloud services that offer dynamic scaling can ensure you only pay for the resources you need when you need them, optimizing cost-efficiency.

#### Can swarms make decisions better than individual agents?

Yes, swarms can make better decisions than individual agents for several reasons:

- **Collective Intelligence**: Swarms combine the knowledge and insights of multiple agents, leading to more informed and well-rounded decision-making processes.
- **Error Correction**: The collaborative nature of swarms allows for error checking and correction among agents, reducing the likelihood of mistakes.
- **Adaptability**: Swarms are highly adaptable to changing environments or requirements, as the collective can quickly reorganize or shift strategies based on new information.

#### How do agents in a swarm communicate?

Communication in a swarm can vary based on the design and purpose of the system but generally involves either direct or indirect interactions:

- **Direct Communication**: Agents exchange information directly through messaging, signals, or other communication protocols designed for the system.
- **Indirect Communication**: Agents influence each other through the environment, a method known as stigmergy. Actions by one agent alter the environment, which in turn influences the behavior of other agents.

#### Are swarms only useful in computational tasks?

While swarms are often associated with computational tasks, their applications extend far beyond. Swarms can be utilized in:

- **Robotics**: Coordinating multiple robots for tasks like search and rescue, exploration, or surveillance.
- **Environmental Monitoring**: Using sensor networks to monitor pollution, wildlife, or climate conditions.
- **Social Sciences**: Modeling social behaviors or economic systems to understand complex societal dynamics.
- **Healthcare**: Coordinating care strategies in hospital settings or managing pandemic responses through distributed data analysis.

#### How do you ensure the security of a swarm system?

Security in swarm systems involves:

- **Encryption**: Ensuring all communications between agents are encrypted to prevent unauthorized access or manipulation.
- **Authentication**: Implementing strict authentication mechanisms to verify the identity of each agent in the swarm.
- **Resilience to Attacks**: Designing the swarm to continue functioning effectively even if some agents are compromised or attacked, utilizing redundancy and fault tolerance strategies.

#### How do individual agents within a swarm share insights without direct learning mechanisms like reinforcement learning?

In the context of pre-trained Large Language Models (LLMs) that operate within a swarm, sharing insights typically involves explicit communication and data exchange protocols rather than direct learning mechanisms like reinforcement learning. Here's how it can work (a toy sketch follows this list):

- **Shared Databases and Knowledge Bases**: Agents can write to and read from a shared database or knowledge base where insights, generated content, and relevant data are stored. This allows agents to benefit from the collective experience of the swarm by accessing information that other agents have contributed.

- **APIs for Information Exchange**: Custom APIs can facilitate the exchange of information between agents. Through these APIs, agents can request specific information or insights from others within the swarm, effectively sharing knowledge without direct learning.
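
A toy sketch of the shared-knowledge-base pattern described above; `SharedKnowledgeBase` and its two methods are illustrative names, and a production version would use a real database or vector store rather than an in-memory dict.

```python
class SharedKnowledgeBase:
    """In-memory stand-in for a store shared by all swarm agents."""

    def __init__(self):
        self._insights = {}  # topic -> list of (agent_id, insight)

    def write(self, agent_id: str, topic: str, insight: str) -> None:
        self._insights.setdefault(topic, []).append((agent_id, insight))

    def read(self, topic: str) -> list:
        return self._insights.get(topic, [])


kb = SharedKnowledgeBase()
kb.write("agent-1", "pricing", "Tiered plans outperformed flat fees.")
print(kb.read("pricing"))  # agent-2 can now build on agent-1's insight
```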

#### How do you balance the autonomy of individual LLMs with the need for coherent collective behavior in a swarm?

Balancing autonomy with collective coherence in a swarm of LLMs involves:

- **Central Coordination Mechanism**: Implementing a lightweight central coordination mechanism that can assign tasks, distribute information, and collect outputs from individual LLMs. This ensures that while each LLM operates autonomously, their actions are aligned with the swarm's overall objectives.

- **Standardized Communication Protocols**: Developing standardized protocols for how LLMs communicate and share information ensures that even though each agent works autonomously, the information exchange remains coherent and aligned with the collective goals.

#### How do LLM swarms adapt to changing environments or tasks without machine learning techniques?

Adaptation in LLM swarms, without relying on machine learning techniques for dynamic learning, can be achieved through the following (a task-allocation sketch follows this list):

- **Dynamic Task Allocation**: A central system or distributed algorithm can dynamically allocate tasks to different LLMs based on the changing environment or requirements. This ensures that the most suitable LLMs are addressing tasks for which they are best suited as conditions change.

- **Pre-trained Versatility**: Utilizing a diverse set of pre-trained LLMs with different specialties or training data allows the swarm to select the most appropriate agent for a task as the requirements evolve.

- **In-Context Learning**: In-context learning can also be employed within LLM swarms to adapt to changing environments or tasks. This approach leverages the collective knowledge and experiences of the swarm, embedded directly in prompts, to guide behavior without any weight updates.
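
A minimal sketch of dynamic task allocation under these ideas: a router matches each incoming task to the pre-trained specialist best suited for it. The keyword-matching heuristic and the agent names are illustrative assumptions.

```python
SPECIALISTS = {
    "code": "code-llm",       # hypothetical model specialized in programming
    "legal": "legal-llm",     # hypothetical model specialized in contracts
    "general": "general-llm"  # fallback generalist
}


def allocate(task: str) -> str:
    """Route a task to the most suitable specialist (toy heuristic)."""
    for keyword, agent in SPECIALISTS.items():
        if keyword in task.lower():
            return agent
    return SPECIALISTS["general"]


print(allocate("Review this legal agreement"))  # legal-llm
print(allocate("Summarize the meeting notes"))  # general-llm
```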
|
91 |
+
|
92 |
+
|
93 |
+
#### Can LLM swarms operate in physical environments, or are they limited to digital spaces?
|
94 |
+
|
95 |
+
LLM swarms primarily operate in digital spaces, given their nature as software entities. However, they can interact with physical environments indirectly through interfaces with sensors, actuaries, or other devices connected to the Internet of Things (IoT). For example, LLMs can process data from physical sensors and control devices based on their outputs, enabling applications like smart home management or autonomous vehicle navigation.
|
96 |
+
|
97 |
+
#### Without direct learning from each other, how do agents in a swarm improve over time?

Improvement over time in a swarm of pre-trained LLMs, without direct learning from each other, can be achieved through:

- **Human Feedback**: Incorporating feedback from human operators or users can guide adjustments to the usage patterns or selection criteria of LLMs within the swarm, optimizing performance based on observed outcomes (a sketch follows this list).

- **Periodic Re-training and Updating**: The individual LLMs can be periodically re-trained or updated by their developers based on collective insights and feedback from their deployment within swarms. While this does not involve direct learning from each encounter, it allows the LLMs to improve over time based on aggregated experiences.
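A minimal sketch of feedback-driven agent selection; `FeedbackRouter`, the agent names, and the weighting scheme are hypothetical illustrations.

```python
import random

class FeedbackRouter:
    """Adjusts how often each LLM is chosen based on human ratings."""

    def __init__(self, agent_names: list):
        self.weights = {name: 1.0 for name in agent_names}

    def record_feedback(self, agent_name: str, rating: float) -> None:
        """rating in [0, 1]; nudge the agent's selection weight toward it."""
        self.weights[agent_name] = 0.8 * self.weights[agent_name] + 0.2 * rating

    def choose(self) -> str:
        """Sample an agent, favoring those with better observed outcomes."""
        names = list(self.weights)
        return random.choices(names, weights=[self.weights[n] for n in names])[0]

router = FeedbackRouter(["drafting_llm", "review_llm"])
router.record_feedback("drafting_llm", 0.9)  # human rated this output highly
router.record_feedback("review_llm", 0.3)
print(router.choose())  # drafting_llm is now sampled more often
```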
These answers reflect the specific context of pre-trained LLMs operating within a swarm, focusing on communication, coordination, and adaptation mechanisms that align with their capabilities and constraints.


#### Conclusion
Swarms represent a powerful paradigm in AI, offering innovative solutions to complex, dynamic problems through collective intelligence and decentralized control. While challenges exist, particularly regarding cost and security, strategic design and management can leverage the strengths of swarm intelligence to achieve remarkable efficiency, adaptability, and robustness in a wide range of applications.
docs/corporate/flywheel.md
ADDED
@@ -0,0 +1,101 @@
# The Swarms Flywheel
1. **Building a Supportive Community:** Initiate by establishing an engaging and inclusive open-source community for both developers and sales freelancers around Swarms. Regular online meetups, webinars, tutorials, and sales training can make them feel welcome and encourage contributions and sales efforts.

2. **Increased Contributions and Sales Efforts:** The more engaged the community, the more developers will contribute to Swarms and the more effort sales freelancers will put into selling Swarms.

3. **Improvement in Quality and Market Reach:** More developer contributions mean better quality, reliability, and feature offerings from Swarms. Simultaneously, increased sales efforts from freelancers boost Swarms' market penetration and visibility.

4. **Rise in User Base:** As Swarms becomes more robust and better known, the user base grows, driving more revenue.

5. **Greater Financial Incentives:** Increased revenue can be redirected to offer more significant financial incentives to both developers and salespeople. Developers can be incentivized based on their contribution to Swarms, and salespeople can be rewarded with higher commissions.

6. **Attract More Developers and Salespeople:** These financial incentives, coupled with the recognition and experience from participating in a successful project, attract more developers and salespeople to the community.

7. **Wider Adoption of Swarms:** An ever-improving product, a growing user base, and an increasing number of passionate salespeople accelerate the adoption of Swarms.

8. **Return to Step 1:** As the community, user base, and sales network continue to grow, the cycle repeats, each time speeding up the flywheel.
```markdown
+---------------------+
|  Building a         |
|  Supportive         | <--+
|  Community          |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|  Increased          |    |
|  Contributions &    |    |
|  Sales Efforts      |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|  Improvement in     |    |
|  Quality & Market   |    |
|  Reach              |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|  Rise in User       |    |
|  Base               |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|  Greater Financial  |    |
|  Incentives         |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|  Attract More       |    |
|  Developers &       |    |
|  Salespeople        |    |
+--------+------------+    |
         |                 |
         v                 |
+--------+------------+    |
|  Wider Adoption of  |    |
|  Swarms             |----+
+---------------------+
```
# Potential Risks and Mitigations:
1. **Insufficient Contributions or Quality of Work**: Open-source efforts rely on individuals being willing and able to spend time contributing. If not enough people participate, or the work they produce is of poor quality, the product development could stall.
   * **Mitigation**: Create a robust community with clear guidelines, support, and resources. Provide incentives for quality contributions, such as a reputation system, swag, or financial rewards. Conduct thorough code reviews to ensure the quality of contributions.

2. **Lack of Sales Results**: Commission-based salespeople will only continue to sell the product if they're successful. If they aren't making enough sales, they may lose motivation and cease their efforts.
   * **Mitigation**: Provide adequate sales training and resources. Ensure the product-market fit is strong, and adjust messaging or sales tactics as necessary. Consider implementing a minimum commission or base pay to reduce risk for salespeople.

3. **Poor User Experience or User Adoption**: If users don't find the product useful or easy to use, they won't adopt it, and the user base won't grow. This could also discourage salespeople and contributors.
   * **Mitigation**: Prioritize user experience in the product development process. Regularly gather and incorporate user feedback. Ensure robust user support is in place.

4. **Inadequate Financial Incentives**: If the financial rewards don't justify the time and effort contributors and salespeople are putting in, they will likely disengage.
   * **Mitigation**: Regularly review and adjust financial incentives as needed. Ensure that the method for calculating and distributing rewards is transparent and fair.

5. **Security and Compliance Risks**: As the user base grows and the software becomes more complex, the risk of security issues increases. Moreover, as contributors from various regions join, compliance with various international laws could become an issue.
   * **Mitigation**: Establish strong security practices from the start. Regularly conduct security audits. Seek legal counsel to understand and adhere to international laws and regulations.
## Activation Plan for the Flywheel:
1. **Community Building**: Begin by fostering a supportive community around Swarms. Encourage early adopters to contribute and provide feedback. Create comprehensive documentation, community guidelines, and a forum for discussion and support.

2. **Sales and Development Training**: Provide resources and training for salespeople and developers. Make sure they understand the product, its value, and how to effectively contribute or sell.

3. **Increase Contributions and Sales Efforts**: Encourage increased participation by highlighting successful contributions and sales, rewarding top contributors and salespeople, and regularly communicating about the project's progress and impact.

4. **Iterate and Improve**: Continually gather and implement feedback to improve Swarms and its market reach. The better the product and its alignment with the market, the more the user base will grow.

5. **Expand User Base**: As the product improves and sales efforts continue, the user base should grow. Ensure you have the infrastructure to support this growth and maintain a positive user experience.

6. **Increase Financial Incentives**: As the user base and product grow, so too should the financial incentives. Make sure rewards continue to be competitive and attractive.

7. **Attract More Contributors and Salespeople**: As the financial incentives and success of the product increase, this should attract more contributors and salespeople, further feeding the flywheel.

Throughout this process, it's important to regularly reassess and adjust your strategy as necessary. Stay flexible and responsive to changes in the market, user feedback, and the evolving needs of the community.
docs/corporate/front_end_contributors.md
ADDED
@@ -0,0 +1,40 @@
# Frontend Contributor Guide
## Mission

At the heart of Swarms is the mission to democratize multi-agent technology, making it accessible to businesses of all sizes around the globe. This technology, which allows for the orchestration of multiple autonomous agents to achieve complex goals, has the potential to revolutionize industries by enhancing efficiency, scalability, and innovation. Swarms is committed to leading this charge by developing a platform that empowers businesses and individuals to harness the power of multi-agent systems without the need for specialized knowledge or resources.


## Understanding Your Impact as a Frontend Engineer

**Crafting User Experiences:** As a frontend engineer at Swarms, you play a crucial role in making multi-agent technology understandable and usable for businesses worldwide. Your work involves translating complex systems into intuitive interfaces, ensuring users can easily navigate, manage, and benefit from multi-agent solutions. By focusing on user-centric design and seamless integration, you help bridge the gap between advanced technology and practical business applications.

**Skills and Attributes for Success:** Successful frontend engineers at Swarms combine technical expertise with a passion for innovation and a deep understanding of user needs. Proficiency in modern frontend technologies, such as React, NextJS, and Tailwind, is just the beginning. You also need a strong grasp of usability principles, accessibility standards, and the ability to work collaboratively with cross-functional teams. Creativity, problem-solving skills, and a commitment to continuous learning are essential for developing solutions that meet diverse business needs.


## Joining the Team

As you contribute to Swarms, you become part of a collaborative effort to change the world. We value each contribution and provide constructive feedback to help you grow. Outstanding contributors who share our vision and demonstrate exceptional skill and dedication are invited to join our team, where they can have an even greater impact on our mission.


### Becoming a Full-Time Swarms Engineer:

Swarms is radically devoted to open source and transparency. To join the full-time team, you must first contribute to the open-source repository so we can assess your technical capability and general way of working. After a series of quality contributions, we'll offer you a full-time position!

Joining Swarms full-time means more than just a job. It's an opportunity to be at the forefront of technological innovation, working alongside passionate professionals dedicated to making a difference. We look for individuals who are not only skilled but also driven by the desire to make multi-agent technology accessible and beneficial to businesses worldwide.


## Resources

- **Project Management Details**
  - **Linear**: Our projects and tasks at a glance. Get a sense of our workflow and priorities.
  - [View on Linear](https://linear.app/swarms/join/e7f4c6c560ffa0e1395820682f4e110a?s=1)

- **Design System and UI/UX Guidelines**
  - **Figma**: Dive into our design system to grasp the aesthetics and user experience objectives of Swarms.
  - [View on Figma](https://www.figma.com/file/KL4VIXfZKwwLgAes2WbGNa/Swarms-Cloud-Platform?type=design&node-id=0%3A1&mode=design&t=MkrM0mBQa6qsTDtJ-1)

- **Swarms Platform Repository**
  - **GitHub**: The hub of our development activities. Familiarize yourself with our codebase and current projects.
  - [Visit GitHub Repository](https://github.com/kyegomez/swarms-platform)

- **[Swarms Community](https://discord.gg/pSTSxqDk)**


### Design Style & User Experience

- [How to build great products with game design, not gamification](https://blog.superhuman.com/game-design-not-gamification/)