🎉 cursor-tools 0.6.0-alpha is here! Your AI teammates just got even smarter. Let me show you what's cooking... 🧵
---
🕵️♂️ Your browser commands are now like having DevTools on autopilot! Console logs and network activity captured by default:
```bash
cursor-tools browser act "Add pizza to cart" --url=pizzaplace.com
# Now automagically shows you all console logs and network calls!
```
---
🔄 Browser sessions got that smooth jazz feel with improved page reuse:
```bash
cursor-tools browser act "Login" --url=myapp.com --connect-to=current
# Keeps your existing page state, no more starting fresh every time!
```
---
🤝 Introducing the ultimate AI power couple! Mix and match providers:
```bash
cursor-tools repo "Plan new feature" --fileProvider=openai --thinkingProvider=anthropic
# It's like having Batman AND Superman on your team
```
---
🤫 Sometimes you just want the answer without the chit-chat. New --quiet mode has got you covered:
```bash
cursor-tools web "what's 42 * 42?" --quiet
1764
# That's it. That's the tweet.
```
---
📹 Lights, Camera, Automation! Record your browser sessions:
```bash
cursor-tools browser act "Buy concert tickets" --url=ticketmaster.com --video=./recordings
# Never miss a beat in your automation choreography 🎵
```
---
🎯 Pro tip: Chain those browser actions like a boss:
```bash
cursor-tools browser act "Click Login | Type '[email protected]' into email | Smash that Submit button" --url=devsite.com
# One command to rule them all!
```
---
🚀 Ready to upgrade your AI team?
```bash
cursor-tools web "Should I try these new features?"
# Spoiler alert: Yes, yes you should!
```
Get it now at https://github.com/getcursor/cursor-tools
#AI #DevTools #OpenSource #CursorTools | {
"source": "eastlondoner/cursor-tools",
"title": "tweets.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/tweets.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_size": 1738
} |
Querying perplexity using sonar-pro for: Angular vs React in 2024 with maxTokens: 8000
In 2024, both Angular and React remain popular choices for front-end development, each with its own strengths and use cases. Here's a comprehensive comparison of Angular vs React in 2024:
## Architecture and Purpose
**Angular:**
- Full-fledged structural framework based on TypeScript[1]
- Designed for developing dynamic web apps and complex enterprise-grade applications[1][3]
- Enforces a structured MVC (Model-View-Controller) architecture[2]
**React:**
- JavaScript-based library focused on building interactive UI components[1]
- Component-based architecture - flexible and modular[2]
- Ideal for creating reusable UI elements and interactive interfaces[1][3]
## Performance
**Angular:**
- High performance, but can slow down as the number of data bindings increases[2]
- Optimizes page loading through 'tree-shaking' technique[3]
- Uses real DOM, which can be slower for large-scale applications[1]
**React:**
- Generally faster due to its virtual DOM implementation[1][2]
- Efficient rendering and update process, especially for large applications[3]
- Smaller bundle size contributes to faster load times[1]
## Data Binding and State Management
**Angular:**
- Two-way data binding, simplifying model-view synchronization[1][3]
- Built-in state management solutions[3]
**React:**
- One-way data flow from parent to child components[2][3]
- Requires additional libraries like Redux for complex state management[1]
## Learning Curve and Development Experience
**Angular:**
- Steeper learning curve due to its comprehensive framework nature[1][4]
- TypeScript knowledge is recommended[2]
**React:**
- Easier to learn and quicker to get started with[1][4]
- More flexible, allowing developers to choose their preferred tools and libraries[3]
## Community and Ecosystem
**React:**
- Larger community with over 222k stars on GitHub (as of May 2024)[2]
- Extensive ecosystem of third-party libraries and tools[3]
**Angular:**
- Strong but smaller community compared to React[2]
- Comprehensive built-in solutions reducing the need for external libraries[3]
## Use Cases
**Angular:**
- Ideal for large-scale enterprise applications[1][3]
- Well-suited for complex projects requiring a rigorous structure[3]
- Good for developing progressive web apps and single-page applications[1]
**React:**
- Excellent for building interactive and dynamic user interfaces[1][3]
- Preferred for projects requiring high performance and quick loading[3]
- Suitable for both small and large-scale applications[1]
## Popularity and Job Market
- React continues to dominate in terms of developer usage and job opportunities[2][7]
- Angular remains a strong choice, especially in enterprise environments[7]
## Conclusion
In 2024, the choice between Angular and React depends on your project requirements, team expertise, and specific use case:
- Choose **Angular** if you need a full-fledged framework for large, complex enterprise applications with a structured architecture.
- Opt for **React** if you prioritize flexibility, performance, and want to build interactive UIs with a gentler learning curve.
Both frameworks continue to evolve and maintain their relevance in the web development landscape. The decision should be based on your project's specific needs, your team's skills, and the long-term goals of your application. | {
"source": "eastlondoner/cursor-tools",
"title": "web-compare.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/web-compare.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_size": 3422
} |
Querying perplexity using sonar-pro for: cursor-tools installation guide with maxTokens: 8000
Here's a comprehensive guide on how to install and set up cursor-tools:
## Installation
To install cursor-tools, follow these steps:
1. Ensure you have Node.js 18 or later installed on your system[4].
2. Run the interactive setup command:
```bash
npx cursor-tools@latest install .
```
This command will[4]:
- Add `cursor-tools` as a dev dependency in your package.json
- Guide you through API key configuration
- Update your Cursor project rules for integration
## Requirements
Before installation, make sure you have the following[4]:
- Node.js 18 or later
- Perplexity API key
- Google Gemini API key
- For browser commands:
- Playwright (`npm install --global playwright`)
- OpenAI API key or Anthropic API key (for `act`, `extract`, and `observe` commands)
## API Key Configuration
During the installation process, you'll be prompted to enter your API keys[4]. These will be stored in a `.cursor-tools.env` file.
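For reference, the resulting `.cursor-tools.env` is a plain key=value file. The variable names below are an assumption based on the providers listed in the requirements and may differ in your version; the values are placeholders:
```bash
# .cursor-tools.env — written by the interactive setup (key names assumed, values are placeholders)
PERPLEXITY_API_KEY="pplx-..."
GEMINI_API_KEY="..."
# Only needed for the browser act/extract/observe commands:
OPENAI_API_KEY="sk-..."
ANTHROPIC_API_KEY="sk-ant-..."
```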
## Cursor Integration
cursor-tools automatically configures Cursor by updating your project rules during installation[4]. This provides:
- Command suggestions
- Usage examples
- Context-aware assistance
For new installations, it uses the `.cursor/rules/cursor-tools.mdc` file. For existing installations, it maintains compatibility with the legacy `.cursorrules` file[4].
## Cursor Agent Configuration
To get the full benefits of cursor-tools, use Cursor agent in "yolo mode"[4]. This allows the agent to execute commands directly.
## Usage
After installation, you can use cursor-tools in your projects. Here are some example commands:
1. Web search:
```bash
cursor-tools web "your query"
```
2. Repository search:
```bash
cursor-tools repo "your query"
```
3. Generate documentation:
```bash
cursor-tools doc "repository_url" "output_file.md"
```
4. GitHub integration:
```bash
cursor-tools github "fetch issue 123"
```
5. Browser automation:
```bash
cursor-tools browser open "https://example.com"
```
## Troubleshooting
If you encounter issues during installation or usage:
1. Ensure `cursor-tools` is properly installed and in your PATH[4].
2. Verify that the `.cursor-tools.env` file exists and contains valid API keys[4].
3. For GitHub operations, ensure your token has the required scopes (repo, read:user)[4].
4. Check your internet connection and API key permissions for model-related errors[4].
Remember, you can always run `cursor-tools install` again to reconfigure API keys or update your Cursor project rules[4].
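If you need to start over, a quick check (assuming a Unix-like shell) is:
```bash
# Re-run the interactive setup to reconfigure API keys and refresh Cursor project rules
npx cursor-tools@latest install .

# Confirm the env file exists and contains your keys
cat .cursor-tools.env
```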
By following this guide, you should be able to successfully install and set up cursor-tools for use with your Cursor AI editor. | {
"source": "eastlondoner/cursor-tools",
"title": "web-results.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/web-results.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_size": 2698
} |
Querying perplexity using sonar-pro for: go to https://ui.shadcn.com/docs and extract detailed setup instructions for a new project, including installation steps, configuration, and initial setup. Focus on the getting started and installation sections. with maxTokens: 8000 | {
"source": "eastlondoner/cursor-tools",
"title": "local-research/shadcn-ui-setup.md",
"url": "https://github.com/eastlondoner/cursor-tools/blob/main/local-research/shadcn-ui-setup.md",
"date": "2025-01-13T15:03:33",
"stars": 2408,
"description": "Give Cursor Agent an AI Team and Advanced Skills",
"file_size": 273
} |
# Contribution to Ryujinx
You can contribute to Ryujinx with PRs, testing of PRs and issues. Contributing code and other implementations is greatly appreciated alongside simply filing issues for problems you encounter.
Please read the entire document before continuing as it can potentially save everyone involved a significant amount of time.
# Quick Links
* [Code Style Documentation](docs/coding-guidelines/coding-style.md)
* [Pull Request Guidelines](docs/workflow/pr-guide.md)
## Reporting Issues
We always welcome bug reports, feature proposals and overall feedback. Here are a few tips on how you can make reporting your issue as effective as possible.
### Identify Where to Report
The Ryujinx codebase is distributed across multiple repositories in the [Ryujinx organization](https://github.com/ryujinx-mirror). Depending on the feedback you might want to file the issue on a different repo. Here are a few common repos:
* [Ryujinx/Ryujinx](https://github.com/ryujinx-mirror/Ryujinx) Ryujinx core project files.
* [Ryujinx/Ryujinx-Games-List](https://github.com/ryujinx-mirror/Ryujinx-Games-List) Ryujinx game compatibility list.
* [Ryujinx/Ryujinx-Website](https://github.com/ryujinx-mirror/Ryujinx-Website) Ryujinx website source code.
* [Ryujinx/Ryujinx-Ldn-Website](https://github.com/ryujinx-mirror/Ryujinx-Ldn-Website) Ryujinx LDN website source code.
### Finding Existing Issues
Before filing a new issue, please search our [open issues](https://github.com/ryujinx-mirror/Ryujinx/issues) to check if it already exists.
If you do find an existing issue, please include your own feedback in the discussion. Do consider upvoting (👍 reaction) the original post, as this helps us prioritize popular issues in our backlog.
### Writing a Good Feature Request
Please review any feature requests already opened to both check it has not already been suggested, and to familiarize yourself with the format. When ready to submit a proposal, please use the [Feature Request issue template](https://github.com/ryujinx-mirror/Ryujinx/issues/new?assignees=&labels=&projects=&template=feature_request.yml&title=%5BFeature+Request%5D).
### Writing a Good Bug Report
Good bug reports make it easier for maintainers to verify and root cause the underlying problem. The better a bug report, the faster the problem will be resolved.
Ideally, a bug report should contain the following information:
* A high-level description of the problem.
* A _minimal reproduction_, i.e. the smallest time commitment/configuration required to reproduce the wrong behavior. This can be in the form of a small homebrew application, or by providing a save file and reproduction steps for a specific game.
* A description of the _expected behavior_, contrasted with the _actual behavior_ observed.
* Information on the environment: OS/distro, CPU, GPU (including driver), RAM etc.
* A Ryujinx log file of the run instance where the issue occurred. Log files can be found in `[Executable Folder]/Logs` and are named chronologically.
* Additional information, e.g. is it a regression from previous versions? Are there any known workarounds?
When ready to submit a bug report, please use the [Bug Report issue template](https://github.com/ryujinx-mirror/Ryujinx/issues/new?assignees=&labels=bug&projects=&template=bug_report.yml&title=%5BBug%5D).
## Contributing Changes
Project maintainers will merge changes that both improve the project and meet our standards for code quality.
The [Pull Request Guide](docs/workflow/pr-guide.md) and [License](https://github.com/ryujinx-mirror/Ryujinx/blob/master/LICENSE.txt) docs define additional guidance.
### DOs and DON'Ts
Please do:
* **DO** follow our [coding style](docs/coding-guidelines/coding-style.md) (C# code-specific).
* **DO** give priority to the current style of the project or file you're changing even if it diverges from the general guidelines.
* **DO** keep the discussions focused. When a new or related topic comes up, it's often better to create a new issue than to sidetrack the discussion.
* **DO** clearly state on an issue that you are going to take on implementing it.
* **DO** blog and tweet (or whatever) about your contributions, frequently!
Please do not:
* **DON'T** make PRs for style changes.
* **DON'T** surprise us with big pull requests. Instead, file an issue and talk with us on Discord to start
a discussion so we can agree on a direction before you invest a large amount
of time.
* **DON'T** commit code that you didn't write. If you find code that you think is a good fit to add to Ryujinx, file an issue or talk to us on Discord to start a discussion before proceeding.
* **DON'T** submit PRs that alter licensing related files or headers. If you believe there's a problem with them, file an issue and we'll be happy to discuss it.
### Suggested Workflow
We use and recommend the following workflow:
1. Create or find an issue for your work.
- You can skip this step for trivial changes.
- Get agreement from the team and the community that your proposed change is a good one if it is of significant size or changes core functionality.
- Clearly state that you are going to take on implementing it, if that's the case. You can request that the issue be assigned to you. Note: The issue filer and the implementer don't have to be the same person.
2. Create a personal fork of the repository on GitHub (if you don't already have one).
3. In your fork, create a branch off of main (`git checkout -b mybranch`).
- Branches are useful since they isolate your changes from incoming changes from upstream. They also enable you to create multiple PRs from the same fork.
4. Make and commit your changes to your branch.
- [Build Instructions](https://github.com/ryujinx-mirror/Ryujinx#building) explains how to build and test.
- Commit messages should be clear statements of action and intent.
5. Build the repository with your changes.
- Make sure that the builds are clean.
- Make sure that `dotnet format` has been run and any corrections tested and committed (a consolidated command sketch follows this list).
6. Create a pull request (PR) against the Ryujinx/Ryujinx repository's **main** branch.
- State in the description what issue or improvement your change is addressing.
- Check if all the Continuous Integration checks are passing. Refer to [Actions](https://github.com/ryujinx-mirror/Ryujinx/actions) to check for outstanding errors.
7. Wait for feedback or approval of your changes from the [core development team](https://github.com/orgs/Ryujinx/teams/developers).
- Details about the pull request [review procedure](docs/workflow/ci/pr-guide.md).
8. When the team members have signed off, and all checks are green, your PR will be merged.
- The next official build will automatically include your change.
- You can delete the branch you used for making the change.
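As a rough end-to-end sketch of the workflow above (the fork URL and branch name are placeholders; see the Build Instructions for the exact build invocation):
```bash
# Clone your personal fork and branch off main
git clone https://github.com/<your-username>/Ryujinx.git
cd Ryujinx
git checkout -b mybranch

# ...make your changes, then make sure the build is clean and formatted
dotnet build
dotnet format

# Commit and push, then open a PR against the main branch on GitHub
git commit -am "Summarize change in 50 characters or less"
git push origin mybranch
```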
### Good First Issues
The team marks the most straightforward issues as [good first issues](https://github.com/ryujinx-mirror/Ryujinx/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). This set of issues is the place to start if you are interested in contributing but new to the codebase.
### Commit Messages
Please format commit messages as follows (based on [A Note About Git Commit Messages](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)):
```
Summarize change in 50 characters or less
Provide more detail after the first line. Leave one blank line below the
summary and wrap all lines at 72 characters or less.
If the change fixes an issue, leave another blank line after the final
paragraph and indicate which issue is fixed in the specific format
below.
Fix #42
```
Also do your best to factor commits appropriately, not too large with unrelated things in the same commit, and not too small with the same small change applied N times in N different commits.
### PR - CI Process
The [Ryujinx continuous integration](https://github.com/ryujinx-mirror/Ryujinx/actions) (CI) system will automatically perform the required builds and run tests (including the ones you are expected to run) for PRs. Builds and test runs must be clean or have bugs properly filed against flaky/unexpected failures that are unrelated to your change.
If the CI build fails for any reason, the PR actions tab should be consulted for further information on the failure. There are a few usual suspects for such a failure:
* `dotnet format` has not been run on the PR and has outstanding stylistic issues.
* There is an error within the PR that fails a test or errors the compiler.
* Random failure of the workflow can occasionally result in a CI failure. In this scenario a maintainer will manually restart the job.
### PR Feedback
Ryujinx team and community members will provide feedback on your change. Community feedback is highly valued. You may see the absence of team feedback if the community has already provided good review feedback.
Two Ryujinx team members must review and approve every PR prior to merge. They will often reply with "LGTM, see nit". That means that the PR will be merged once the feedback is resolved. "LGTM" == "looks good to me".
There are lots of thoughts and [approaches](https://github.com/antlr/antlr4-cpp/blob/master/CONTRIBUTING.md#emoji) for how to efficiently discuss changes. It is best to be clear and explicit with your feedback. Please be patient with people who might not understand the finer details about your approach to feedback.
#### Copying Changes from Other Projects
Ryujinx uses some implementations and frameworks from other projects. The following rules must be followed for PRs that include changes from another project:
- The license of the file is [permissive](https://en.wikipedia.org/wiki/Permissive_free_software_licence).
- The license of the file is left intact.
- The contribution is correctly attributed in the [3rd party notices](https://github.com/ryujinx-mirror/Ryujinx/blob/master/distribution/legal/THIRDPARTY.md) file in the repository, as needed. | {
"source": "ryujinx-mirror/ryujinx",
"title": "CONTRIBUTING.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/CONTRIBUTING.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 10011
} |
[links/discord]: https://discord.gg/xmHPGDfVCa
[badges/discord]: https://img.shields.io/discord/1291765437100720243?label=ryujinx-mirror&logo=discord&logoColor=FFFFFF&color=5865F3
As of now, the [ryujinx-mirror/ryujinx](https://github.com/ryujinx-mirror/ryujinx) repository serves as a downstream hard fork of the original Ryujinx project. You can download nightly binaries for Windows, macOS, and Linux (including `AppImage`s) from the [latest release](https://github.com/ryujinx-mirror/ryujinx/releases/latest).
> [!NOTE]
> This fork is not affiliated with the **original** Ryujinx project or with Nintendo in any way.
### Current Goals
If you would like a version with more new features & improvements, feel free to check out [GreemDev's fork](https://github.com/GreemDev/Ryujinx). We aim to keep this repository more focused on small fixes and infrastructure reconstruction, staying more true to the original Ryujinx project.
* ☑️ Reconstruct basic build infrastructure & workflows for this repository, based on revision hashes as opposed to semver releases (for now)
* ☑️ To be as safe as possible, remove all previous in-app and meta references to Patreon, `ryujinx.org`, etc., while keeping full attribution of original authors and contributors intact.
* Keep 'branding' as pure and faithful to the original project as possible.
### Join Discussion
Feel free to join the [ryujinx-mirror Discord community][links/discord] to join in on the development of this fork going forward.<br>
See `#ryujinx-info` for more information.
[![ryujinx-mirror Discord][badges/discord]][links/discord]
___
<h1 align="center">
<br>
<a href="https://github.com/ryujinx-mirror/ryujinx"><img src="distribution/misc/Logo.svg" alt="Ryujinx" width="150"></a>
<br>
<b>Ryujinx</b>
<br>
<sub><sup><b>(REE-YOU-JINX)</b></sup></sub>
<br>
</h1>
<p align="center">
Ryujinx is an open-source Nintendo Switch emulator, created by gdkchan, written in C#.
This emulator aims at providing excellent accuracy and performance, a user-friendly interface and consistent builds.
It was written from scratch and development on the project began in September 2017.
Ryujinx is available on Github under the <a href="LICENSE.txt" target="_blank">MIT license</a>.
<br />
</p>
## Compatibility
As of May 2024, Ryujinx has been tested on approximately 4,300 titles;
over 4,100 boot past menus and into gameplay, with roughly 3,550 of those being considered playable.
You can check out the compatibility list [here](https://github.com/ryujinx-mirror/Ryujinx-Games-List/issues).
Anyone is free to submit a new game test or update an existing game test entry;
simply follow the new issue template and testing guidelines, or post as a reply to the applicable game issue.
Use the search function to see if a game has been tested already!
## Usage
To run this emulator, your PC must be equipped with at least 8GiB of RAM;
failing to meet this requirement may result in a poor gameplay experience or unexpected crashes.
<!--
See our [Setup & Configuration Guide](https://github.com/ryujinx-mirror/Ryujinx/wiki/Ryujinx-Setup-&-Configuration-Guide) on how to set up the emulator.
For our Local Wireless (LDN) builds, see our [Multiplayer: Local Play/Local Wireless Guide](https://github.com/ryujinx-mirror/Ryujinx/wiki/Multiplayer-(LDN-Local-Wireless)-Guide).
-->
<!--Avalonia UI comes with translations for various languages. See [Crowdin](https://crwd.in/ryujinx) for more information.-->
## Latest build
These builds are compiled automatically for each commit on the master branch.
While we strive to ensure optimal stability and performance prior to pushing an update, our automated builds **may be unstable or completely broken**.
See [the releases page](https://github.com/ryujinx-mirror/ryujinx/releases) for automatic builds for Windows, macOS, and Linux.
<!--
If you want to see details on updates to the emulator, you can visit our [Changelog](https://github.com/ryujinx-mirror/Ryujinx/wiki/Changelog).
The latest automatic build for Windows, macOS, and Linux can be found on the [Official Website](https://ryujinx.org/download).
-->
## Documentation
If you are planning to contribute or just want to learn more about this project please read through our [documentation](docs/README.md).
## Building
If you wish to build the emulator yourself, follow these steps:
### Step 1
Install the [.NET 8.0 (or higher) SDK](https://dotnet.microsoft.com/download/dotnet/8.0).
Make sure your SDK version is higher or equal to the required version specified in [global.json](global.json).
### Step 2
Either use `git clone https://github.com/ryujinx-mirror/ryujinx` on the command line to clone the repository, or use the Code --> Download ZIP button to get the files.
### Step 3
To build Ryujinx, open a command prompt inside the project directory.
You can quickly access it on Windows by holding shift in File Explorer, then right clicking and selecting `Open command window here`.
Then type the following command: `dotnet build -c Release -o build`
The built files will be found in the newly created `build` directory.
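Putting the steps together, a typical build from a clean checkout looks like this (the clone directory name is simply whatever `git clone` creates):
```bash
# Clone the repository, enter it, and build a Release configuration into ./build
git clone https://github.com/ryujinx-mirror/ryujinx
cd ryujinx
dotnet build -c Release -o build
```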
Ryujinx system files are stored in the `Ryujinx` folder.
This folder is located in the user folder, which can be accessed by clicking `Open Ryujinx Folder` under the File menu in the GUI.
## Features
- **Audio**
Audio output is entirely supported; audio input (microphone) isn't supported.
We use C# wrappers for [OpenAL](https://openal-soft.org/), and [SDL2](https://www.libsdl.org/) & [libsoundio](http://libsound.io/) as fallbacks.
- **CPU**
The CPU emulator, ARMeilleure, emulates an ARMv8 CPU and currently has support for most 64-bit ARMv8 and some of the ARMv7 (and older) instructions, including partial 32-bit support.
It translates the ARM code to a custom IR, performs a few optimizations, and turns that into x86 code.
There are three memory manager options available depending on the user's preference, leveraging both software-based (slower) and host-mapped modes (much faster).
The fastest option (host, unchecked) is set by default.
Ryujinx also features an optional Profiled Persistent Translation Cache, which essentially caches translated functions so that they do not need to be translated every time the game loads.
The net result is a significant reduction in load times (the amount of time between launching a game and arriving at the title screen) for nearly every game.
NOTE: This feature is enabled by default in the Options menu > System tab.
You must launch the game at least twice to the title screen or beyond before performance improvements are unlocked on the third launch!
These improvements are permanent and do not require any extra launches going forward.
- **GPU**
The GPU emulator emulates the Switch's Maxwell GPU using either the OpenGL (version 4.5 minimum), Vulkan, or Metal (via MoltenVK) APIs through a custom build of OpenTK or Silk.NET respectively.
There are currently six graphics enhancements available to the end user in Ryujinx: Disk Shader Caching, Resolution Scaling, Anti-Aliasing, Scaling Filters (including FSR), Anisotropic Filtering and Aspect Ratio Adjustment.
These enhancements can be adjusted or toggled as desired in the GUI.
- **Input**
We currently have support for keyboard, mouse, touch input, JoyCon input, and nearly all controllers.
Motion controls are natively supported in most cases; for dual-JoyCon motion support, DS4Windows or BetterJoy are currently required.
In all scenarios, you can set up everything inside the input configuration menu.
- **DLC & Modifications**
Ryujinx is able to manage add-on content/downloadable content through the GUI.
Mods (romfs, exefs, and runtime mods such as cheats) are also supported;
the GUI contains a shortcut to open the respective mods folder for a particular game.
- **Configuration**
The emulator has settings for enabling or disabling some logging, remapping controllers, and more.
You can configure all of them through the graphical interface or manually through the config file, `Config.json`, found in the user folder which can be accessed by clicking `Open Ryujinx Folder` under the File menu in the GUI.
<!--
## Contact
If you have contributions, suggestions, need emulator support or just want to get in touch with the team, join our [Discord server](https://discord.com/invite/Ryujinx).
You may also review our [FAQ](https://github.com/ryujinx-mirror/Ryujinx/wiki/Frequently-Asked-Questions).
-->
## License
This software is licensed under the terms of the [MIT license](LICENSE.txt).
This project makes use of code authored by the libvpx project, licensed under BSD and the ffmpeg project, licensed under LGPLv3.
See [LICENSE.txt](LICENSE.txt) and [THIRDPARTY.md](distribution/legal/THIRDPARTY.md) for more details.
## Credits
- [LibHac](https://github.com/Thealexbarney/LibHac) is used for our file-system.
- [AmiiboAPI](https://www.amiiboapi.com) is used in our Amiibo emulation.
- [ldn_mitm](https://github.com/spacemeowx2/ldn_mitm) is used for one of our available multiplayer modes.
- [ShellLink](https://github.com/securifybv/ShellLink) is used for Windows shortcut generation. | {
"source": "ryujinx-mirror/ryujinx",
"title": "README.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/README.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 9198
} |
# Documents Index
This repo includes several documents that explain both high-level and low-level concepts about Ryujinx and its functions. These are very useful for contributors, to get context that can be very difficult to acquire from just reading code.
Intro to Ryujinx
==================
Ryujinx is an open-source Nintendo Switch emulator, created by gdkchan, written in C#.
* The CPU emulator, ARMeilleure, emulates an ARMv8 CPU and currently has support for most 64-bit ARMv8 and some of the ARMv7 (and older) instructions.
* The GPU emulator emulates the Switch's Maxwell GPU using either the OpenGL (version 4.5 minimum), Vulkan, or Metal (via MoltenVK) APIs through a custom build of OpenTK or Silk.NET respectively.
* Audio output is entirely supported via C# wrappers for SDL2, with OpenAL & libsoundio as fallbacks.
Getting Started
===============
- [Installing the .NET SDK](https://dotnet.microsoft.com/download)
- [Official .NET Docs](https://docs.microsoft.com/dotnet/core/)
Contributing (Building, testing, benchmarking, profiling, etc.)
===============
If you want to contribute a code change to this repo, start here.
- [Contributor Guide](../CONTRIBUTING.md)
Coding Guidelines
=================
- [C# coding style](coding-guidelines/coding-style.md)
- [Service Implementation Guidelines - WIP](https://gist.github.com/gdkchan/84ba88cd50efbe58d1babfaa7cd7c455)
Project Docs
=================
To be added. Many project files will contain basic XML docs for key functions and classes in the meantime. | {
"source": "ryujinx-mirror/ryujinx",
"title": "docs/README.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/docs/README.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 1531
} |
# ffmpeg (LGPLv3)
<details>
<summary>See License</summary>
```
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
```
</details>
# libvpx (BSD)
<details>
<summary>See License</summary>
```
Copyright (c) 2010, The WebM Project authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name of Google, nor the WebM Project, nor the names
of its contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
</details>
# Atmosphère (MIT)
<details>
<summary>See License</summary>
```
MIT License
Copyright (c) 2018-2020 Atmosphère-NX
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
</details>
# OpenAL Soft (LGPLv2)
<details>
<summary>See License</summary>
```
GNU LIBRARY GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the library GPL. It is
numbered 2 because it goes with version 2 of the ordinary GPL.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Library General Public License, applies to some
specially designated Free Software Foundation software, and to any
other libraries whose authors decide to use it. You can use it for
your libraries, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if
you distribute copies of the library, or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link a program with the library, you must provide
complete object files to the recipients so that they can relink them
with the library, after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
Our method of protecting your rights has two steps: (1) copyright
the library, and (2) offer you this license which gives you legal
permission to copy, distribute and/or modify the library.
Also, for each distributor's protection, we want to make certain
that everyone understands that there is no warranty for this free
library. If the library is modified by someone else and passed on, we
want its recipients to know that what they have is not the original
version, so that any problems introduced by others will not reflect on
the original authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that companies distributing free
software will individually obtain patent licenses, thus in effect
transforming the program into proprietary software. To prevent this,
we have made it clear that any patent must be licensed for everyone's
free use or not licensed at all.
Most GNU software, including some libraries, is covered by the ordinary
GNU General Public License, which was designed for utility programs. This
license, the GNU Library General Public License, applies to certain
designated libraries. This license is quite different from the ordinary
one; be sure to read it in full, and don't assume that anything in it is
the same as in the ordinary license.
The reason we have a separate public license for some libraries is that
they blur the distinction we usually make between modifying or adding to a
program and simply using it. Linking a program with a library, without
changing the library, is in some sense simply using the library, and is
analogous to running a utility program or application program. However, in
a textual and legal sense, the linked executable is a combined work, a
derivative of the original library, and the ordinary General Public License
treats it as such.
Because of this blurred distinction, using the ordinary General
Public License for libraries did not effectively promote software
sharing, because most developers did not use the libraries. We
concluded that weaker conditions might promote sharing better.
However, unrestricted linking of non-free programs would deprive the
users of those programs of all benefit from the free status of the
libraries themselves. This Library General Public License is intended to
permit developers of non-free programs to use free libraries, while
preserving your freedom as a user of such programs to change the free
libraries that are incorporated in them. (We have not seen how to achieve
this as regards changes in header files, but we have achieved it as regards
changes in the actual functions of the Library.) The hope is that this
will lead to faster development of free libraries.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, while the latter only
works together with the library.
Note that it is possible for a library to be covered by the ordinary
General Public License rather than by this special one.
GNU LIBRARY GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library which
contains a notice placed by the copyright holder or other authorized
party saying it may be distributed under the terms of this Library
General Public License (also called "this License"). Each licensee is
addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also compile or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
c) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
d) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the source code distributed need not include anything that is normally
distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Library General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
```
</details>
# ShellLink (MIT)
<details>
<summary>See License</summary>
```
MIT License
Copyright (c) 2017 Yorick Koster, Securify B.V.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
</details> | {
"source": "ryujinx-mirror/ryujinx",
"title": "distribution/legal/THIRDPARTY.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/distribution/legal/THIRDPARTY.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 36222
} |
# C# Coding Style
The general rule we follow is "use Visual Studio defaults".
Using an IDE that supports the `.editorconfig` standard will make this much simpler.
1. We use [Allman style](http://en.wikipedia.org/wiki/Indent_style#Allman_style) braces, where each brace begins on a new line. A single line statement block can go without braces but the block must be properly indented on its own line and must not be nested in other statement blocks that use braces (See rule 18 for more details). One exception is that a `using` statement is permitted to be nested within another `using` statement by starting on the following line at the same indentation level, even if the nested `using` contains a controlled block.
2. We use four spaces of indentation (no tabs).
3. We use `_camelCase` for internal and private fields and use `readonly` where possible. Prefix internal and private instance fields with `_`, thread static fields with `t_`. When used on static fields, `readonly` should come after `static` (e.g. `static readonly` not `readonly static`). Public fields should be used sparingly and should use PascalCasing with no prefix when used.
4. We avoid `this.` unless absolutely necessary.
5. We always specify the visibility, even if it's the default (e.g.
`private string _foo` not `string _foo`). Visibility should be the first modifier (e.g.
`public abstract` not `abstract public`).
6. Namespace imports should be specified at the top of the file, *outside* of `namespace` declarations.
7. Avoid more than one empty line at any time. For example, do not have two
blank lines between members of a type.
8. Avoid spurious free spaces.
For example avoid `if (someVar == 0)...`, where the dots mark the spurious free spaces.
Consider enabling "View White Space (Ctrl+R, Ctrl+W)" or "Edit -> Advanced -> View White Space" if using Visual Studio to aid detection.
9. If a file happens to differ in style from these guidelines (e.g. private members are named `m_member`
rather than `_member`), the existing style in that file takes precedence.
10. We only use `var` when the type is explicitly named on the right-hand side, typically due to either `new` or an explicit cast, e.g. `var stream = new FileStream(...)` not `var stream = OpenStandardInput()`.
- Similarly, target-typed `new()` can only be used when the type is explicitly named on the left-hand side, in a variable definition statement or a field definition statement. e.g. `FileStream stream = new(...);`, but not `stream = new(...);` (where the type was specified on a previous line).
11. We use language keywords instead of BCL types (e.g. `int, string, float` instead of `Int32, String, Single`, etc) for both type references as well as method calls (e.g. `int.Parse` instead of `Int32.Parse`). See issue [#13976](https://github.com/dotnet/runtime/issues/13976) for examples.
12. We use PascalCasing to name all our constant local variables and fields. The only exception is for interop code where the constant value should exactly match the name and value of the code you are calling via interop.
13. We use PascalCasing for all method names, including local functions.
14. We use ```nameof(...)``` instead of ```"..."``` whenever possible and relevant.
15. Fields should be specified at the top within type declarations.
16. When including non-ASCII characters in the source code use Unicode escape sequences (\uXXXX) instead of literal characters. Literal non-ASCII characters occasionally get garbled by a tool or editor.
17. When using labels (for goto), indent the label one less than the current indentation.
18. When using a single-statement if, we follow these conventions:
- Never use single-line form (for example: `if (source == null) throw new ArgumentNullException("source");`)
- Using braces is always accepted, and required if any block of an `if`/`else if`/.../`else` compound statement uses braces or if a single statement body spans multiple lines.
- Braces may be omitted only if the body of *every* block associated with an `if`/`else if`/.../`else` compound statement is placed on a single line.
19. Make all internal and private types static or sealed unless derivation from them is required. As with any implementation detail, they can be changed if/when derivation is required in the future.
20. XML docs should be used when writing interfaces or when a class/method is deemed sufficient in scope or complexity.
21. So-called [Magic Numbers](https://en.wikipedia.org/wiki/Magic_number_(programming)) should be defined as named constants before use (for example `for (int i = 56; i < 68; i++)` could read `for (int i = _currentAge; i < _retireAge; i++)`).
This may be ignored for trivial or syntactically common statements.
An [EditorConfig](https://editorconfig.org "EditorConfig homepage") file (`.editorconfig`) has been provided at the root of the runtime repository, enabling C# auto-formatting conforming to the above guidelines.
### Example File:
``ShaderCache.cs:``
```C#
using Ryujinx.Common.Configuration;
using Ryujinx.Common.Logging;
using Ryujinx.Graphics.GAL;
using Ryujinx.Graphics.Gpu.Engine.Threed;
using Ryujinx.Graphics.Gpu.Engine.Types;
using Ryujinx.Graphics.Gpu.Image;
using Ryujinx.Graphics.Gpu.Memory;
using Ryujinx.Graphics.Gpu.Shader.DiskCache;
using Ryujinx.Graphics.Shader;
using Ryujinx.Graphics.Shader.Translation;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;
namespace Ryujinx.Graphics.Gpu.Shader
{
/// <summary>
/// Memory cache of shader code.
/// </summary>
class ShaderCache : IDisposable
{
/// <summary>
/// Default flags used on the shader translation process.
/// </summary>
public const TranslationFlags DefaultFlags = TranslationFlags.DebugMode;
private readonly struct TranslatedShader
{
public readonly CachedShaderStage Shader;
public readonly ShaderProgram Program;
public TranslatedShader(CachedShaderStage shader, ShaderProgram program)
{
Shader = shader;
Program = program;
}
}
...
/// <summary>
/// Processes the queue of shaders that must save their binaries to the disk cache.
/// </summary>
public void ProcessShaderCacheQueue()
{
// Check to see if the binaries for previously compiled shaders are ready, and save them out.
while (_programsToSaveQueue.TryPeek(out ProgramToSave programToSave))
{
ProgramLinkStatus result = programToSave.HostProgram.CheckProgramLink(false);
if (result != ProgramLinkStatus.Incomplete)
{
if (result == ProgramLinkStatus.Success)
{
_cacheWriter.AddShader(programToSave.CachedProgram, programToSave.BinaryCode ?? programToSave.HostProgram.GetBinary());
}
_programsToSaveQueue.Dequeue();
}
else
{
break;
}
}
}
}
}
```
For other languages, our current best guidance is consistency. When editing files, keep new code and changes consistent with the style in the files. For new files, it should conform to the style for that component. If there is a completely new component, anything that is reasonably broadly accepted is fine. | {
"source": "ryujinx-mirror/ryujinx",
"title": "docs/coding-guidelines/coding-style.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/docs/coding-guidelines/coding-style.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 7515
} |
# Pull Request Guide
## Contributing Rules
All contributions to the Ryujinx/Ryujinx repository are made via pull requests (PRs) rather than through direct commits. Pull requests are merged by the maintainers after a review and at least two approvals from the core development team.
To merge pull requests, you must have write permissions in the repository.
## Quick Code Review Rules
* Do not mix unrelated changes in one pull request. For example, a code style change should never be mixed with a bug fix.
* All changes should follow the existing code style. You can read more about our code style at [docs/coding-guidelines](../coding-guidelines/coding-style.md).
* Adding external dependencies is to be avoided unless not doing so would introduce _significant_ complexity. Any dependency addition should be justified and discussed before merge.
* Use Draft pull requests for changes you are still working on but want early CI loop feedback. When you think your changes are ready for review, [change the status](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/changing-the-stage-of-a-pull-request) of your pull request.
* Rebase your changes when required or directly requested. Changes should always be committed on top of the upstream branch, not the other way around.
* If you are asked to make changes during the review process, make them as a new commit.
* Only resolve GitHub conversations with reviewers once they have been addressed with a commit, or via a mutual agreement.
## Pull Request Ownership
Every pull request will automatically have labels and reviewers assigned. The label not only indicates which code area the change touches but also determines which area reviewers will be assigned.
If during the code review process a merge conflict occurs, the PR author is responsible for its resolution. Help will be provided if necessary although GitHub makes this easier by allowing simple conflict resolution using the [conflict-editor](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/resolving-a-merge-conflict-on-github).
## Pull Request Builds
When submitting a PR to the `Ryujinx/Ryujinx` repository, various builds will run validating many areas to ensure we keep developer productivity and product quality high. These various workflows can be tracked in the [Actions](https://github.com/ryujinx-mirror/Ryujinx/actions) tab of the repository. If the job continues to completion, the build artifacts will be uploaded and posted as a comment in the PR discussion.
## Review Turnaround Times
Ryujinx is a project maintained by volunteers on a completely free-time basis. As such, we cannot guarantee any particular timeframe for pull request review and approval. Weeks to months are common for larger (>500 line) PRs, but there are some best practices that help avoid review purgatory.
* Make the reviewers life easier wherever possible. Make use of descriptive commit names, code comments and XML docs where applicable.
* If there is disagreement on feedback, defer to the development team and community over any personal opinion.
* We're human. We miss things. We forget things. If there has been radio silence on your changes for a substantial period of time, do not hesitate to reach out, either with something simple like "bump" on GitHub or directly on Discord.
To reiterate: make the review as easy for us as possible, respond promptly, and feel free to interact with us directly about anything else.
## Merging Pull Requests
Anyone with write access can merge a pull request manually when the following conditions have been met:
* The PR has been approved by two reviewers and any other objections are addressed.
* You can request follow up reviews from the original reviewers if they requested changes.
* The PR successfully builds and passes all tests in the Continuous Integration (CI) system. In case of failures, refer to the [Actions](https://github.com/ryujinx-mirror/Ryujinx/actions) tab of your PR.
Typically, PRs are merged as one commit (a squash merge), which creates a simpler history than a merge commit. Exceptions to this are rare, and typically mean that there is a series of cleanly separated changes that would be too hard to understand if squashed together, or that for some reason we want to preserve the ability to dissect them.
## Blocking Pull Request Merging
If for whatever reason you would like to move your pull request back to an in-progress status to avoid merging it in its current form, you can turn the PR into a draft PR by selecting the option under the reviewers section. Alternatively, you can add a [WIP] prefix to the pull request title.
## Old Pull Request Policy
From time to time we will review older PRs and check them for relevance. If we find the PR is inactive or no longer applies, we will close it. As the PR owner, you can simply reopen it if you feel your closed PR needs our attention. | {
"source": "ryujinx-mirror/ryujinx",
"title": "docs/workflow/pr-guide.md",
"url": "https://github.com/ryujinx-mirror/ryujinx/blob/mirror/master/docs/workflow/pr-guide.md",
"date": "2024-10-01T19:48:13",
"stars": 2387,
"description": "Hard-fork of the Ryujinx project",
"file_size": 5010
} |
# Retrieval-Augmented Generation (RAG) Project
**_Think it. Build it. bRAG it._ 🚀 bRAGAI's coming soon (🤫)**
**[Join the waitlist](https://bragai.dev/)** for exclusive early access, be among the first to try your AI-powered full-stack development assistant, and transform ideas into production-ready web apps in minutes.
---------------------
This repository contains a comprehensive exploration of Retrieval-Augmented Generation (RAG) for various applications.
Each notebook provides a detailed, hands-on guide to setting up and experimenting with RAG from an introductory level to advanced implementations, including multi-querying and custom RAG builds.

## Project Structure
If you want to jump straight in, check out `full_basic_rag.ipynb`: it gives you boilerplate starter code for a fully customizable RAG chatbot.
Make sure to run the notebooks in a virtual environment (see the `Getting Started` section below).
The following notebooks can be found under the directory `notebooks/`.
### [1]\_rag_setup_overview.ipynb
This introductory notebook provides an overview of RAG architecture and its foundational setup.
The notebook walks through:
- **Environment Setup**: Configuring the environment, installing necessary libraries, and API setups.
- **Initial Data Loading**: Basic document loaders and data preprocessing methods.
- **Embedding Generation**: Generating embeddings using various models, including OpenAI's embeddings.
- **Vector Store**: Setting up a vector store (ChromaDB/Pinecone) for efficient similarity search.
- **Basic RAG Pipeline**: Creating a simple retrieval and generation pipeline to serve as a baseline.
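To make the baseline concrete, here is a minimal, framework-free sketch of the retrieve-then-generate loop the notebook builds up to. The `embed` and `generate` functions below are hypothetical placeholders for the embedding model and chat model used in the notebook, not the repository's actual API.
```python
# Minimal retrieve-then-generate loop. `embed` and `generate` are placeholders
# standing in for a real embedding model and LLM call.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: deterministic pseudo-random vector per text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(8)

def generate(prompt: str) -> str:
    # Placeholder for a chat-model call.
    return f"[answer generated from a prompt of {len(prompt)} characters]"

documents = [
    "RAG combines retrieval with generation.",
    "Vector stores index document embeddings for similarity search.",
    "Chunking splits long documents before embedding.",
]
index = [(doc, embed(doc)) for doc in documents]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str, k: int = 2) -> str:
    query_vec = embed(question)
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    context = "\n".join(doc for doc, _ in ranked[:k])
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

print(answer("What does a vector store do?"))
```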
### [2]\_rag_with_multi_query.ipynb
Building on the basics, this notebook introduces multi-querying techniques in the RAG pipeline, exploring:
- **Multi-Query Setup**: Configuring multiple queries to diversify retrieval.
- **Advanced Embedding Techniques**: Utilizing multiple embedding models to refine retrieval.
- **Pipeline with Multi-Querying**: Implementing multi-query handling to improve relevance in response generation (see the sketch after this list).
- **Comparison & Analysis**: Comparing results with single-query pipelines and analyzing performance improvements.
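As a rough illustration of the retrieval side, the sketch below merges the results of several query variants into one deduplicated candidate list, the unique union that multi-query retrieval typically relies on. `retrieve` and its toy corpus are invented for the example and are not the notebook's actual retriever.
```python
# Merge results from several query variants into a single deduplicated list.
def retrieve(query: str) -> list[str]:
    # Toy single-query retriever: maps a query to a list of document IDs.
    fake_results = {
        "how to split documents": ["doc_chunking", "doc_splitters"],
        "document chunking strategies": ["doc_chunking", "doc_overlap"],
        "best chunk size for rag": ["doc_overlap", "doc_eval"],
    }
    return fake_results.get(query, [])

def multi_query_retrieve(query_variants: list[str]) -> list[str]:
    seen: set[str] = set()
    merged: list[str] = []
    for query in query_variants:
        for doc_id in retrieve(query):
            if doc_id not in seen:  # keep first occurrence, drop duplicates
                seen.add(doc_id)
                merged.append(doc_id)
    return merged

variants = [
    "how to split documents",
    "document chunking strategies",
    "best chunk size for rag",
]
print(multi_query_retrieve(variants))
# ['doc_chunking', 'doc_splitters', 'doc_overlap', 'doc_eval']
```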
### [3]_rag_routing_and_query_construction.ipynb
This notebook delves deeper into customizing a RAG pipeline.
It covers:
- **Logical Routing:** Implements function-based routing for classifying user queries to appropriate data sources based on programming languages.
- **Semantic Routing:** Uses embeddings and cosine similarity to direct questions to either a math or physics prompt, optimizing response accuracy (illustrated in the sketch after this list).
- **Query Structuring for Metadata Filters:** Defines structured search schema for YouTube tutorial metadata, enabling advanced filtering (e.g., by view count, publication date).
- **Structured Search Prompting:** Leverages LLM prompts to generate database queries for retrieving relevant content based on user input.
- **Integration with Vector Stores:** Links structured queries to vector stores for efficient data retrieval.
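The semantic-routing step reduces to: embed the question, embed a short description of each route, and pick the route with the highest cosine similarity. The sketch below shows only that structure; because `embed` is a hash-seeded placeholder rather than a real embedding model, the route it picks here is arbitrary.
```python
# Semantic routing by cosine similarity. `embed` is a placeholder; swap in a
# real embedding model to get semantically meaningful routing.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)

routes = {
    "math": "algebra, calculus, equations, proofs",
    "physics": "forces, energy, quantum mechanics, relativity",
}
route_vectors = {name: embed(description) for name, description in routes.items()}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def route(question: str) -> str:
    question_vec = embed(question)
    return max(route_vectors, key=lambda name: cosine(question_vec, route_vectors[name]))

print(route("What is the integral of x^2?"))  # "math" or "physics"
```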
### [4]_rag_indexing_and_advanced_retrieval.ipynb
Continuing from the previous customization, this notebook explores:
- **Preface on Document Chunking:** Points to external resources for document chunking techniques.
- **Multi-representation Indexing:** Sets up a multi-vector indexing structure for handling documents with different embeddings and representations (a minimal sketch follows this list).
- **In-Memory Storage for Summaries:** Uses InMemoryByteStore for storing document summaries alongside parent documents, enabling efficient retrieval.
- **MultiVectorRetriever Setup:** Integrates multiple vector representations to retrieve relevant documents based on user queries.
- **RAPTOR Implementation:** Explores RAPTOR, an advanced indexing and retrieval model, linking to in-depth resources.
- **ColBERT Integration:** Demonstrates ColBERT-based token-level vector indexing and retrieval, which captures contextual meaning at a fine-grained level.
- **Wikipedia Example with ColBERT:** Retrieves information about Hayao Miyazaki using the ColBERT retrieval model for demonstration.
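In miniature, multi-representation indexing means searching over compact representations (such as summaries) while returning the full parent documents they point to. The sketch below uses simple keyword overlap in place of vector similarity, and both the documents and summaries are invented for illustration.
```python
# Search over summaries, return the parent documents they reference.
parent_docs = {
    "p1": "Full text of a long report about GPU shader caching ...",
    "p2": "Full text of a long tutorial on vector databases ...",
}
summaries = {
    "p1": "shader cache report",
    "p2": "vector database tutorial",
}

def overlap(query: str, summary: str) -> int:
    # Keyword overlap stands in for embedding similarity here.
    return len(set(query.lower().split()) & set(summary.lower().split()))

def retrieve_parent(query: str) -> str:
    best_id = max(summaries, key=lambda doc_id: overlap(query, summaries[doc_id]))
    return parent_docs[best_id]

print(retrieve_parent("vector database basics"))  # returns the full text of p2
```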
### [5]_rag_retrieval_and_reranking.ipynb
This final notebook brings together the RAG system components, with a focus on scalability and optimization:
- **Document Loading and Splitting:** Loads and chunks documents for indexing, preparing them for vector storage.
- **Multi-query Generation with RAG-Fusion:** Uses a prompt-based approach to generate multiple search queries from a single input question.
- **Reciprocal Rank Fusion (RRF):** Implements RRF for re-ranking multiple retrieval lists, merging results for improved relevance (the core formula is sketched after this list).
- **Retriever and RAG Chain Setup:** Constructs a retrieval chain for answering queries, using fused rankings and RAG chains to pull contextually relevant information.
- **Cohere Re-Ranking:** Demonstrates re-ranking with Cohere’s model for additional contextual compression and refinement.
- **CRAG and Self-RAG Retrieval:** Explores advanced retrieval approaches like CRAG and Self-RAG, with links to examples.
- **Exploration of Long-Context Impact:** Links to resources explaining the impact of long-context retrieval on RAG models.
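Reciprocal Rank Fusion itself is only a few lines: every ranked list contributes 1 / (k + rank) for each document it contains, and the fused order is the documents sorted by their summed scores. A minimal sketch with toy document IDs:
```python
# Reciprocal Rank Fusion over several ranked result lists.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

results_per_query = [
    ["doc_a", "doc_b", "doc_c"],  # results for query variant 1
    ["doc_b", "doc_a", "doc_d"],  # results for query variant 2
    ["doc_c", "doc_b", "doc_e"],  # results for query variant 3
]
print(reciprocal_rank_fusion(results_per_query))
# doc_b ranks highly in every list, so it tops the fused ranking.
```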
## Getting Started
### Pre-requisites
Ensure **Python 3.11.11** (preferred) is installed on your system. Follow the platform-specific instructions below to install it if not already installed.
#### macOS
1. Install [Homebrew](https://brew.sh/) if not already installed:
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
2. Install Python 3.11.11:
```bash
brew install [email protected]
```
3. Verify installation:
```bash
python3.11 --version
```
#### Linux
1. Update your package manager:
```bash
sudo apt update
```
2. Install Python 3.11.11:
```bash
sudo apt install python3.11 python3.11-venv
```
3. Verify installation:
```bash
python3.11 --version
```
#### Windows
1. Download the Python 3.11.11 installer from [Python.org](https://www.python.org/downloads/).
2. Run the installer and ensure you check the box **"Add Python to PATH"**.
3. Verify installation:
```cmd
python --version
```
---
### Installation Instructions
#### 1. Clone the Repository
```bash
git clone https://github.com/bRAGAI/bRAG-langchain.git
cd bRAG-langchain
```
#### 2. Create a Virtual Environment
Use Python 3.11.11 to create a virtual environment:
```bash
python3.11 -m venv venv
```
Activate the virtual environment:
- **macOS/Linux**:
```bash
source venv/bin/activate
```
- **Windows**:
```cmd
venv\Scripts\activate
```
#### 3. Verify and Fix Python Version
If the virtual environment defaults to a different Python version (e.g., Python 3.13):
1. Verify the current Python version inside the virtual environment:
```bash
python --version
```
2. Use Python 3.11 explicitly within the virtual environment:
```bash
python3.11
```
3. Ensure the `python` command uses Python 3.11 by creating a symbolic link:
```bash
ln -sf $(which python3.11) $(dirname $(which python))/python
```
4. Verify the fix:
```bash
python --version
```
#### 4. Install Dependencies
Install the required packages:
```bash
pip install -r requirements.txt
```
---
### Additional Steps
#### 5. Run the Notebooks
Begin with `[1]_rag_setup_overview.ipynb` to get familiar with the setup process. Proceed sequentially through the other notebooks:
- `[1]_rag_setup_overview.ipynb`
- `[2]_rag_with_multi_query.ipynb`
- `[3]_rag_routing_and_query_construction.ipynb`
- `[4]_rag_indexing_and_advanced_retrieval.ipynb`
- `[5]_rag_retrieval_and_reranking.ipynb`
#### 6. Set Up Environment Variables
1. Duplicate the `.env.example` file in the root directory and rename it to `.env`.
2. Add the following keys (replace with your actual values):
```env
# LLM Model - Get key at https://platform.openai.com/api-keys
OPENAI_API_KEY="your-api-key"
# LangSmith - Get key at https://smith.langchain.com
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_API_KEY="your-api-key"
LANGCHAIN_PROJECT="your-project-name"
# Pinecone Vector Database - Get key at https://app.pinecone.io
PINECONE_INDEX_NAME="your-project-index"
PINECONE_API_HOST="your-host-url"
PINECONE_API_KEY="your-api-key"
# Cohere - Get key at https://dashboard.cohere.com/api-keys
COHERE_API_KEY=your-api-key
```
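If you run code outside the notebooks, one way to load these values into the process environment is the `python-dotenv` package; this is an assumption for illustration, and the package may or may not already be listed in `requirements.txt`:
```python
# Load variables from .env into the environment (assumes python-dotenv is installed).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing from .env"
```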
---
You're now ready to use the project!
## Usage
After setting up the environment and running the notebooks in sequence, you can:
1. **Experiment with Retrieval-Augmented Generation**:
Use the foundational setup in `[1]_rag_setup_overview.ipynb` to understand the basics of RAG.
2. **Implement Multi-Querying**:
Learn how to improve response relevance by introducing multi-querying techniques in `[2]_rag_with_multi_query.ipynb`.
## Star History
<a href="https://star-history.com/#bragai/brag-langchain&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=bragai/brag-langchain&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=bragai/brag-langchain&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=bragai/brag-langchain&type=Date" />
</picture>
</a>
## Contact
Do you have questions or want to collaborate? Please open an issue or email Taha Ababou at [email protected]
`If this project helps you, consider buying me a coffee ☕. Your support helps me keep contributing to the open-source community!`
<p>
<a href="https://buymeacoffee.com/bragai" target="_blank" rel="noopener noreferrer">
<img src="https://img.shields.io/badge/sponsor-30363D?style=for-the-badge&logo=GitHub-Sponsors&logoColor=#white" />
</a>
</p>
<br>
The notebooks and visual diagrams were inspired by Lance Martin's LangChain Tutorial. | {
"source": "bRAGAI/bRAG-langchain",
"title": "README.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/README.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 10051
} |
### Here are all the sources used to write up the `[1]_rag_setup_overview.ipynb` file:
1. LangSmith Documentation: https://docs.smith.langchain.com/
2. RAG Quickstart: https://python.langchain.com/docs/tutorials/rag/
3. Count Tokens: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb
4. Text Embedding Models: https://python.langchain.com/docs/integrations/text_embedding/openai
5. Cosine Similarity: https://platform.openai.com/docs/guides/embeddings/frequently-asked-questions
6. Document Loaders: https://python.langchain.com/docs/integrations/document_loaders/
7. Splitter: https://python.langchain.com/docs/how_to/recursive_text_splitter/
8. Vectorstores: https://python.langchain.com/docs/integrations/vectorstores/
9. RAG Chains: https://python.langchain.com/docs/how_to/sequence/
These links provide additional resources and documentation related to the concepts discussed in the file. | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[1]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[1]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 946
} |
### Here are all the sources used to write up the `[2]_rag_with_multi_query.ipynb` file:
1. LangSmith Documentation: https://docs.smith.langchain.com/
2. Multi Query Retriever Documentation: https://python.langchain.com/docs/how_to/MultiQueryRetriever/
3. RAG Fusion Documentation: https://github.com/langchain-ai/langchain/blob/master/cookbook/rag_fusion.ipynb?ref=blog.langchain.dev
4. Forget RAG Blog: https://medium.com/towards-data-science/forget-rag-the-future-is-rag-fusion-1147298d8ad1
5. Research Papers:
1. **Least-To-Most Prompting Enables Complex Reasoning In Large Language Models**: https://arxiv.org/pdf/2205.10625.pdf
2. **Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions**: https://arxiv.org/abs/2212.10509.pdf
3. **Take A Step Back: Evoking Reasoning Via Abstraction In Large Language Models**: https://arxiv.org/pdf/2310.06117.pdf
4. **HyDE Paper**: https://arxiv.org/abs/2212.10496
6. HyDE Documentation: https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[2]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[2]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 1090
} |
### Here are all the sources used to write up the `[3]_rag_routing_and_query_construction.ipynb` file:
1. https://docs.smith.langchain.com/ (LangSmith documentation)
2. https://python.langchain.com/docs/how_to/routing/ (Expression language & Embedding Router cookbook)
3. https://smith.langchain.com/public/c2ca61b4-3810-45d0-a156-3d6a73e9ee2a/r (Trace example)
4. https://smith.langchain.com/public/98c25405-2631-4de8-b12a-1891aded3359/r (Additional trace example)
5. https://blog.langchain.dev/query-construction/ (Query construction blog post)
6. https://blog.langchain.dev/enhancing-rag-based-applications-accuracy-by-constructing-and-leveraging-knowledge-graphs/ (Knowledge graphs in RAG)
7. https://python.langchain.com/v0.1/docs/use_cases/query_analysis/ (Query analysis documentation)
8. https://python.langchain.com/docs/how_to/self_query/ (Self-querying documentation)
These links provide additional resources and documentation related to the concepts discussed in the file. | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[3]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[3]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 986
} |
### Here are all the sources used to write up the `[4]_rag_indexing_and_advanced_retrieval.ipynb` file:
1. https://www.youtube.com/watch?v=8OJC21T2SL4 (Greg Kamradt's video on document chunking)
2. https://docs.smith.langchain.com/ (LangSmith documentation)
3. https://blog.langchain.dev/semi-structured-multi-modal-rag/ (Semi-structured multi-modal RAG)
4. https://python.langchain.com/docs/how_to/multi_vector/ (Multi-vector retrievers)
5. https://arxiv.org/abs/2312.06648 (Dense X Retrieval research paper)
6. https://python.langchain.com/docs/how_to/parent_document_retriever/ (Parent Document Retriever documentation)
7. https://www.youtube.com/watch?v=jbGchdTL7d0 (LangChain video on advanced retrieval)
8. https://arxiv.org/pdf/2401.18059 (RAPTOR research paper)
9. https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb (RAPTOR implementation cookbook)
These links provide additional resources and documentation related to the concepts discussed in the file. | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[4]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[4]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 989
} |
### Here are all the sources used to write up the `[5]_rag_retrieval_and_reranking.ipynb` file:
1. https://docs.smith.langchain.com/ (LangSmith documentation)
2. https://python.langchain.com/docs/integrations/retrievers/cohere-reranker#doing-reranking-with-coherererank (Cohere Re-Rank)
3. https://txt.cohere.com/rerank/ (Cohere Rerank documentation)
4. https://www.youtube.com/watch?v=E2shqsYwxck (CRAG Deep Dive)
5. https://github.com/langchain-ai/langgraph/blob/main/examples/rag/langgraph_crag.ipynb (CRAG notebook)
6. https://github.com/langchain-ai/langgraph/tree/main/examples/rag (RAG examples)
7. https://www.youtube.com/watch?v=SsHUNfhF32s (Long context impact deep dive)
8. https://docs.google.com/presentation/d/1mJUiPBdtf58NfuSEQ7pVSEQ2Oqmek7F1i4gBwR6JDss/edit#slide=id.g26c0cb8dc66_0_0 (Slides)
These links provide additional resources and documentation related to the concepts discussed in the file. | {
"source": "bRAGAI/bRAG-langchain",
"title": "docs/[5]_sources.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/docs/[5]_sources.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 1022
} |
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here. | {
"source": "bRAGAI/bRAG-langchain",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 833
} |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here. | {
"source": "bRAGAI/bRAG-langchain",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/bRAGAI/bRAG-langchain/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-11-16T07:41:36",
"stars": 2340,
"description": "Everything you need to know to build your own RAG application",
"file_size": 594
} |
<p align="center">
<img src="assets/logo.png" height=100>
</p>
<div align="center">
<a href="https://yuewen.cn/videos"><img src="https://img.shields.io/static/v1?label=Step-Video&message=Web&color=green"></a>  
<a href="https://arxiv.org/abs/2502.10248"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a>  
<a href="https://x.com/StepFun_ai"><img src="https://img.shields.io/static/v1?label=X.com&message=Web&color=blue"></a>  
</div>
<div align="center">
<a href="https://huggingface.co/stepfun-ai/stepvideo-t2v"><img src="https://img.shields.io/static/v1?label=Step-Video-T2V&message=HuggingFace&color=yellow"></a>  
<a href="https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo"><img src="https://img.shields.io/static/v1?label=Step-Video-T2V-Turbo&message=HuggingFace&color=yellow"></a>  
</div>
## 🔥🔥🔥 News!!
* Feb 17, 2025: 👋 We release the inference code and model weights of Step-Video-T2V. [Download](https://huggingface.co/stepfun-ai/stepvideo-t2v)
* Feb 17, 2025: 👋 We release the inference code and model weights of Step-Video-T2V-Turbo. [Download](https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo)
* Feb 17, 2025: 🎉 We have made our technical report available as open source. [Read](https://arxiv.org/abs/2502.10248)
## Video Demos
<table border="0" style="width: 100%; text-align: center; margin-top: 1px;">
<tr>
<td><video src="https://github.com/user-attachments/assets/9274b351-595d-41fb-aba3-f58e6e91603a" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/2f6b3ad5-e93b-436b-98bc-4701182d8652" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/67d20ee7-ad78-4b8f-80f6-3fdb00fb52d8" width="100%" controls autoplay loop muted></video></td>
</tr>
<tr>
<td><video src="https://github.com/user-attachments/assets/9abce409-105d-4a8a-ad13-104a98cc8a0b" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/8d1e1a47-048a-49ce-85f6-9d013f2d8e89" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/32cf4bd1-ec1f-4f77-a488-cd0284aa81bb" width="100%" controls autoplay loop muted></video></td>
</tr>
<tr>
<td><video src="https://github.com/user-attachments/assets/f95a7a49-032a-44ea-a10f-553d4e5d21c6" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/3534072e-87d9-4128-a87f-28fcb5d951e0" width="100%" controls autoplay loop muted></video></td>
<td><video src="https://github.com/user-attachments/assets/6d893dad-556d-4527-a882-666cba3d10e9" width="100%" controls autoplay loop muted></video></td>
</tr>
</table>
## Table of Contents
1. [Introduction](#1-introduction)
2. [Model Summary](#2-model-summary)
3. [Model Download](#3-model-download)
4. [Model Usage](#4-model-usage)
5. [Benchmark](#5-benchmark)
6. [Online Engine](#6-online-engine)
7. [Citation](#7-citation)
8. [Acknowledgement](#8-acknowledgement)
## 1. Introduction
We present **Step-Video-T2V**, a state-of-the-art (SoTA) text-to-video pre-trained model with 30 billion parameters and the capability to generate videos up to 204 frames in length. To enhance both training and inference efficiency, we propose a deep compression VAE for videos, achieving 16x16 spatial and 8x temporal compression ratios. Direct Preference Optimization (DPO) is applied in the final stage to further enhance the visual quality of the generated videos. Step-Video-T2V's performance is evaluated on a novel video generation benchmark, **Step-Video-T2V-Eval**, demonstrating its SoTA text-to-video quality compared to both open-source and commercial engines.
## 2. Model Summary
In Step-Video-T2V, videos are represented by a high-compression Video-VAE, achieving 16x16 spatial and 8x temporal compression ratios. User prompts are encoded using two bilingual pre-trained text encoders to handle both English and Chinese. A DiT with 3D full attention is trained using Flow Matching and is employed to denoise input noise into latent frames, with text embeddings and timesteps serving as conditioning factors. To further enhance the visual quality of the generated videos, a video-based DPO approach is applied, which effectively reduces artifacts and ensures smoother, more realistic video outputs.
<p align="center">
<img width="80%" src="assets/model_architecture.png">
</p>
### 2.1. Video-VAE
A deep compression Variational Autoencoder (VideoVAE) is designed for video generation tasks, achieving 16x16 spatial and 8x temporal compression ratios while maintaining exceptional video reconstruction quality. This compression not only accelerates training and inference but also aligns with the diffusion process's preference for condensed representations.
<p align="center">
<img width="70%" src="assets/dcvae.png">
</p>
### 2.2. DiT w/ 3D Full Attention
Step-Video-T2V is built on the DiT architecture, which has 48 layers, each containing 48 attention heads, with each head’s dimension set to 128. AdaLN-Single is leveraged to incorporate the timestep condition, while QK-Norm in the self-attention mechanism is introduced to ensure training stability. Additionally, 3D RoPE is employed, playing a critical role in handling sequences of varying video lengths and resolutions.
<p align="center">
<img width="80%" src="assets/dit.png">
</p>
### 2.3. Video-DPO
In Step-Video-T2V, we incorporate human feedback through Direct Preference Optimization (DPO) to further enhance the visual quality of the generated videos. DPO leverages human preference data to fine-tune the model, ensuring that the generated content aligns more closely with human expectations. The overall DPO pipeline is shown below, highlighting its critical role in improving both the consistency and quality of the video generation process.
<p align="center">
<img width="100%" src="assets/dpo_pipeline.png">
</p>
## 3. Model Download
| Models | 🤗Huggingface | 🤖Modelscope |
|:-------:|:-------:|:-------:|
| Step-Video-T2V | [download](https://huggingface.co/stepfun-ai/stepvideo-t2v) | [download](https://www.modelscope.cn/models/stepfun-ai/stepvideo-t2v)
| Step-Video-T2V-Turbo (Inference Step Distillation) | [download](https://huggingface.co/stepfun-ai/stepvideo-t2v-turbo) | [download](https://www.modelscope.cn/models/stepfun-ai/stepvideo-t2v-turbo)
## 4. Model Usage
### 📜 4.1 Requirements
The following table shows the requirements for running the Step-Video-T2V model (batch size = 1, w/o cfg distillation) to generate videos:
| Model | height/width/frame | Peak GPU Memory | 50 steps w flash-attn | 50 steps w/o flash-attn |
|:------------:|:------------:|:------------:|:------------:|:------------:|
| Step-Video-T2V | 768px768px204f | 78.55 GB | 860 s | 1437 s |
| Step-Video-T2V | 544px992px204f | 77.64 GB | 743 s | 1232 s |
| Step-Video-T2V | 544px992px136f | 72.48 GB | 408 s | 605 s |
* An NVIDIA GPU with CUDA support is required.
* The model is tested on four GPUs.
* **Recommended**: We recommend to use GPUs with 80GB of memory for better generation quality.
* Tested operating system: Linux
* The self-attention in the text encoder (step_llm) only supports CUDA compute capabilities sm_80, sm_86, and sm_90
### 🔧 4.2 Dependencies and Installation
- Python >= 3.10.0 (Recommend to use [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 2.3-cu121](https://pytorch.org/)
- [CUDA Toolkit](https://developer.nvidia.com/cuda-downloads)
- [FFmpeg](https://www.ffmpeg.org/)
```bash
git clone https://github.com/stepfun-ai/Step-Video-T2V.git
conda create -n stepvideo python=3.10
conda activate stepvideo
cd Step-Video-T2V
pip install -e .
pip install flash-attn --no-build-isolation ## flash-attn is optional
```
### 🚀 4.3 Inference Scripts
#### Multi-GPU Parallel Deployment
- We employed a decoupling strategy for the text encoder, VAE decoding, and DiT to optimize GPU resource utilization by DiT. As a result, a dedicated GPU is needed to handle the API services for the text encoder's embeddings and VAE decoding.
```bash
python api/call_remote_server.py --model_dir where_you_download_dir & ## We assume you have more than 4 GPUs available. This command will return the URL for both the caption API and the VAE API. Please use the returned URL in the following command.
parallel=4 # or parallel=8
url='127.0.0.1'
model_dir=where_you_download_dir
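# The example prompt below translates to: "An astronaut discovers a stone tablet on the Moon engraved with the word 'stepfun', shining brightly."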
torchrun --nproc_per_node $parallel run_parallel.py --model_dir $model_dir --vae_url $url --caption_url $url --ulysses_degree $parallel --prompt "一名宇航员在月球上发现一块石碑,上面印有“stepfun”字样,闪闪发光" --infer_steps 50 --cfg_scale 9.0 --time_shift 13.0
```
#### Single-GPU Inference and Quantization
- The open-source project DiffSynth-Studio by ModelScope offers single-GPU inference and quantization support, which can significantly reduce the VRAM required. Please refer to [their examples](https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/stepvideo) for more information.
### 🚀 4.4 Best-of-Practice Inference settings
Step-Video-T2V exhibits robust performance in inference settings, consistently generating high-fidelity and dynamic videos. However, our experiments reveal that variations in inference hyperparameters can have a substantial effect on the trade-off between video fidelity and dynamics. To achieve optimal results, we recommend the following best practices for tuning inference parameters:
| Models | infer_steps | cfg_scale | time_shift | num_frames |
|:-------:|:-------:|:-------:|:-------:|:-------:|
| Step-Video-T2V | 30-50 | 9.0 | 13.0 | 204
| Step-Video-T2V-Turbo (Inference Step Distillation) | 10-15 | 5.0 | 17.0 | 204 |
## 5. Benchmark
We are releasing [Step-Video-T2V Eval](https://github.com/stepfun-ai/Step-Video-T2V/blob/main/benchmark/Step-Video-T2V-Eval) as a new benchmark, featuring 128 Chinese prompts sourced from real users. This benchmark is designed to evaluate the quality of generated videos across 11 distinct categories: Sports, Food, Scenery, Animals, Festivals, Combination Concepts, Surreal, People, 3D Animation, Cinematography, and Style.
## 6. Online Engine
The online version of Step-Video-T2V is available on [跃问视频](https://yuewen.cn/videos), where you can also explore some impressive examples.
## 7. Citation
```
@misc{ma2025stepvideot2vtechnicalreportpractice,
title={Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model},
author={Guoqing Ma and Haoyang Huang and Kun Yan and Liangyu Chen and Nan Duan and Shengming Yin and Changyi Wan and Ranchen Ming and Xiaoniu Song and Xing Chen and Yu Zhou and Deshan Sun and Deyu Zhou and Jian Zhou and Kaijun Tan and Kang An and Mei Chen and Wei Ji and Qiling Wu and Wen Sun and Xin Han and Yanan Wei and Zheng Ge and Aojie Li and Bin Wang and Bizhu Huang and Bo Wang and Brian Li and Changxing Miao and Chen Xu and Chenfei Wu and Chenguang Yu and Dapeng Shi and Dingyuan Hu and Enle Liu and Gang Yu and Ge Yang and Guanzhe Huang and Gulin Yan and Haiyang Feng and Hao Nie and Haonan Jia and Hanpeng Hu and Hanqi Chen and Haolong Yan and Heng Wang and Hongcheng Guo and Huilin Xiong and Huixin Xiong and Jiahao Gong and Jianchang Wu and Jiaoren Wu and Jie Wu and Jie Yang and Jiashuai Liu and Jiashuo Li and Jingyang Zhang and Junjing Guo and Junzhe Lin and Kaixiang Li and Lei Liu and Lei Xia and Liang Zhao and Liguo Tan and Liwen Huang and Liying Shi and Ming Li and Mingliang Li and Muhua Cheng and Na Wang and Qiaohui Chen and Qinglin He and Qiuyan Liang and Quan Sun and Ran Sun and Rui Wang and Shaoliang Pang and Shiliang Yang and Sitong Liu and Siqi Liu and Shuli Gao and Tiancheng Cao and Tianyu Wang and Weipeng Ming and Wenqing He and Xu Zhao and Xuelin Zhang and Xianfang Zeng and Xiaojia Liu and Xuan Yang and Yaqi Dai and Yanbo Yu and Yang Li and Yineng Deng and Yingming Wang and Yilei Wang and Yuanwei Lu and Yu Chen and Yu Luo and Yuchu Luo and Yuhe Yin and Yuheng Feng and Yuxiang Yang and Zecheng Tang and Zekai Zhang and Zidong Yang and Binxing Jiao and Jiansheng Chen and Jing Li and Shuchang Zhou and Xiangyu Zhang and Xinhao Zhang and Yibo Zhu and Heung-Yeung Shum and Daxin Jiang},
year={2025},
eprint={2502.10248},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2502.10248},
}
```
## 8. Acknowledgement
- We would like to express our sincere thanks to the [xDiT](https://github.com/xdit-project/xDiT) team for their invaluable support and parallelization strategy.
- Our code will be integrated into the official repository of [Huggingface/Diffusers](https://github.com/huggingface/diffusers).
- We thank the [FastVideo](https://github.com/hao-ai-lab/FastVideo) team for their continued collaboration and look forward to launching inference acceleration solutions together in the near future. | {
"source": "stepfun-ai/Step-Video-T2V",
"title": "README.md",
"url": "https://github.com/stepfun-ai/Step-Video-T2V/blob/main/README.md",
"date": "2025-02-08T08:46:51",
"stars": 2324,
"description": null,
"file_size": 13151
} |
# Contributing to openpi
We welcome contributions, improvements, and modifications. Everyone is welcome to use openpi in accordance with the [license](LICENSE). Contributors are also welcome to submit bug reports, feature requests, and pull requests. We can't promise to approve every pull request, and we are a small team with limited bandwidth to review all requests, but we'll give it our best effort. Specifics are described below.
## Issues and feature requests
You are welcome to use the Github [discussion](https://github.com/Physical-Intelligence/openpi/discussions) feature if you would like to discuss something that is not directly reporting an issue or making a feature request. This is suitable for questions about how to use some aspect of openpi, or other topics.
If you found a bug or other issue, please first check that the issue was not already reported (use the search bar on Github under Issues). If the issue has not yet been reported, please include this information when filing a Github issue:
- Your OS type and version and the version of Python you are using
- Code that allows us to reproduce your bug, including all dependencies
- Traceback of any exception
- Any other information that would help us, such as a screenshot
In order for us to address any issue, we must be able to reproduce it, so if you encountered the issue after making modifications to openpi, please reproduce the issue without any other modifications and provide a code snippet that allows us to quickly reproduce the problem on `main`.
If you would like to submit a feature request, please check that the feature request does not already exist, and please provide the following information:
- The motivation for the feature
- A description of the problem you are trying to solve or your use case
- Enough information for us to understand the nature of the request
- Some information for how you intend to use it (this might help us in understanding the motivation!)
We can't promise to support every feature request, but it is helpful to us to know the use cases that you are interested in!
## Submitting a pull request
If you implemented support for a new robot or environment, or some other new feature, we welcome pull requests (PRs) to openpi. We encourage you to create a [feature request](https://github.com/Physical-Intelligence/openpi/issues) or make a post on the [discussion](https://github.com/Physical-Intelligence/openpi/discussions) board before starting to work on your PR, if you would like to get a sense for whether we are likely to approve your PR if it is submitted. Since we are a small team with limited ability to provide maintenance and support, we may not accept all PRs (e.g., if we believe it would make the code harder to maintain, or if reviewing the PR is out of scope for us), so contacting us in advance is a good way to get a sense for whether your PR is likely to get approved for merging into openpi directly. But even if it isn't, you are of course more than welcome to maintain your own fork with whatever modifications you would like. When creating PRs, we recommend every contribution to consider the following:
- Make sure that your PR has a clear title and description
- Run `pre-commit` (install using `pre-commit install` first), and run `ruff check .` and `ruff format .`
- Make sure your PR passes all tests | {
"source": "Physical-Intelligence/openpi",
"title": "CONTRIBUTING.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/CONTRIBUTING.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 3363
} |
# openpi
openpi holds open-source models and packages for robotics, published by the [Physical Intelligence team](https://www.physicalintelligence.company/).
Currently, this repo contains two types of models:
- the [π₀ model](https://www.physicalintelligence.company/blog/pi0), a flow-based diffusion vision-language-action model (VLA)
- the [π₀-FAST model](https://www.physicalintelligence.company/research/fast), an autoregressive VLA, based on the FAST action tokenizer.
For both models, we provide _base model_ checkpoints, pre-trained on 10k+ hours of robot data, and examples for using them out of the box or fine-tuning them to your own datasets.
This is an experiment: $\pi_0$ was developed for our own robots, which differ from the widely used platforms such as [ALOHA](https://tonyzhaozh.github.io/aloha/) and [DROID](https://droid-dataset.github.io/), and though we are optimistic that researchers and practitioners will be able to run creative new experiments adapting $\pi_0$ to their own platforms, we do not expect every such attempt to be successful. All this is to say: $\pi_0$ may or may not work for you, but you are welcome to try it and see!
## Requirements
To run the models in this repository, you will need an NVIDIA GPU with at least the following specifications. These estimations assume a single GPU, but you can also use multiple GPUs with model parallelism to reduce per-GPU memory requirements by configuring `fsdp_devices` in the training config. Please also note that the current training script does not yet support multi-node training.
| Mode | Memory Required | Example GPU |
| ------------------ | --------------- | ------------------ |
| Inference | > 8 GB | RTX 4090 |
| Fine-Tuning (LoRA) | > 22.5 GB | RTX 4090 |
| Fine-Tuning (Full) | > 70 GB | A100 (80GB) / H100 |
The repo has been tested with Ubuntu 22.04; we do not currently support other operating systems.
## Installation
When cloning this repo, make sure to update submodules:
```bash
git clone --recurse-submodules [email protected]:Physical-Intelligence/openpi.git
# Or if you already cloned the repo:
git submodule update --init --recursive
```
We use [uv](https://docs.astral.sh/uv/) to manage Python dependencies. See the [uv installation instructions](https://docs.astral.sh/uv/getting-started/installation/) to set it up. Once uv is installed, run the following to set up the environment:
```bash
GIT_LFS_SKIP_SMUDGE=1 uv sync
```
NOTE: `GIT_LFS_SKIP_SMUDGE=1` is needed to pull LeRobot as a dependency.
**Docker**: As an alternative to uv installation, we provide instructions for installing openpi using Docker. If you encounter issues with your system setup, consider using Docker to simplify installation. See [Docker Setup](docs/docker.md) for more details.
## Model Checkpoints
### Base Models
We provide multiple base VLA model checkpoints. These checkpoints have been pre-trained on 10k+ hours of robot data, and can be used for fine-tuning.
| Model | Use Case | Description | Checkpoint Path |
| ------------ | ----------- | ----------------------------------------------------------------------------------------------------------- | ---------------------------------------------- |
| $\pi_0$ | Fine-Tuning | Base diffusion [π₀ model](https://www.physicalintelligence.company/blog/pi0) for fine-tuning | `s3://openpi-assets/checkpoints/pi0_base` |
| $\pi_0$-FAST | Fine-Tuning | Base autoregressive [π₀-FAST model](https://www.physicalintelligence.company/research/fast) for fine-tuning | `s3://openpi-assets/checkpoints/pi0_fast_base` |
### Fine-Tuned Models
We also provide "expert" checkpoints for various robot platforms and tasks. These models are fine-tuned from the base models above and intended to run directly on the target robot. These may or may not work on your particular robot. Since these checkpoints were fine-tuned on relatively small datasets collected with more widely available robots, such as ALOHA and the DROID Franka setup, they might not generalize to your particular setup, though we found some of these, especially the DROID checkpoint, to generalize quite broadly in practice.
| Model | Use Case | Description | Checkpoint Path |
| ------------------------ | --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------- |
| $\pi_0$-FAST-DROID | Inference | $\pi_0$-FAST model fine-tuned on the [DROID dataset](https://droid-dataset.github.io/), can perform a wide range of simple table-top manipulation tasks 0-shot in new scenes on the DROID robot platform | `s3://openpi-assets/checkpoints/pi0_fast_droid` |
| $\pi_0$-DROID | Fine-Tuning | $\pi_0$ model fine-tuned on the [DROID dataset](https://droid-dataset.github.io/), faster inference than $\pi_0$-FAST-DROID, but may not follow language commands as well | `s3://openpi-assets/checkpoints/pi0_droid` |
| $\pi_0$-ALOHA-towel | Inference | $\pi_0$ model fine-tuned on internal ALOHA data, can fold diverse towels 0-shot on [ALOHA](https://tonyzhaozh.github.io/aloha/) robot platforms | `s3://openpi-assets/checkpoints/pi0_aloha_towel` |
| $\pi_0$-ALOHA-tupperware | Inference | $\pi_0$ model fine-tuned on internal ALOHA data, can unpack food from a tupperware container | `s3://openpi-assets/checkpoints/pi0_aloha_tupperware` |
| $\pi_0$-ALOHA-pen-uncap | Inference | $\pi_0$ model fine-tuned on [public ALOHA data](https://dit-policy.github.io/), can uncap a pen | `s3://openpi-assets/checkpoints/pi0_aloha_pen_uncap` |
By default, checkpoints are automatically downloaded from `s3://openpi-assets` and are cached in `~/.cache/openpi` when needed. You can overwrite the download path by setting the `OPENPI_DATA_HOME` environment variable.
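For example, to cache checkpoints on a larger data drive instead of the default location (the path below is purely illustrative):
```bash
# Store downloaded openpi checkpoints under /mnt/data instead of ~/.cache/openpi
export OPENPI_DATA_HOME=/mnt/data/openpi_cache
```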
## Running Inference for a Pre-Trained Model
Our pre-trained model checkpoints can be run with a few lines of code (here our $\pi_0$-FAST-DROID model):
```python
from openpi.training import config
from openpi.policies import policy_config
from openpi.shared import download
config = config.get_config("pi0_fast_droid")
checkpoint_dir = download.maybe_download("s3://openpi-assets/checkpoints/pi0_fast_droid")
# Create a trained policy.
policy = policy_config.create_trained_policy(config, checkpoint_dir)
# Run inference on a dummy example.
example = {
"observation/exterior_image_1_left": ...,
"observation/wrist_image_left": ...,
...
"prompt": "pick up the fork"
}
action_chunk = policy.infer(example)["actions"]
```
You can also test this out in the [example notebook](examples/inference.ipynb).
We provide detailed step-by-step examples for running inference of our pre-trained checkpoints on [DROID](examples/droid/README.md) and [ALOHA](examples/aloha_real/README.md) robots.
**Remote Inference**: We provide [examples and code](docs/remote_inference.md) for running inference of our models **remotely**: the model can run on a different server and stream actions to the robot via a websocket connection. This makes it easy to use more powerful GPUs off-robot and keep robot and policy environments separate.
**Test inference without a robot**: We provide a [script](examples/simple_client/README.md) for testing inference without a robot. This script will generate a random observation and run inference with the model. See [here](examples/simple_client/README.md) for more details.
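For instance, one way to exercise this end to end, using the commands from the simple client example, is to start a policy server in one terminal and the dummy client in another:
```bash
# Terminal 1: start a policy server for the DROID environment
uv run scripts/serve_policy.py --env DROID

# Terminal 2: send randomly generated observations and print the inference rate
uv run examples/simple_client/main.py --env DROID
```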
## Fine-Tuning Base Models on Your Own Data
We will fine-tune the $\pi_0$-FAST model on the [Libero dataset](https://libero-project.github.io/datasets) as a running example for how to fine-tune a base model on your own data. We will explain three steps:
1. Convert your data to a LeRobot dataset (which we use for training)
2. Defining training configs and running training
3. Spinning up a policy server and running inference
### 1. Convert your data to a LeRobot dataset
We provide a minimal example script for converting Libero data to a LeRobot dataset in [`examples/libero/convert_libero_data_to_lerobot.py`](examples/libero/convert_libero_data_to_lerobot.py). You can easily modify it to convert your own data! You can download the raw Libero dataset from [here](https://huggingface.co/datasets/openvla/modified_libero_rlds), and run the script with:
```bash
uv run examples/libero/convert_libero_data_to_lerobot.py --data_dir /path/to/your/libero/data
```
### 2. Defining training configs and running training
To fine-tune a base model on your own data, you need to define configs for data processing and training. We provide example configs with detailed comments for Libero below, which you can modify for your own dataset:
- [`LiberoInputs` and `LiberoOutputs`](src/openpi/policies/libero_policy.py): Defines the data mapping from the Libero environment to the model and vice versa. Will be used for both training and inference.
- [`LeRobotLiberoDataConfig`](src/openpi/training/config.py): Defines how to process raw Libero data from LeRobot dataset for training.
- [`TrainConfig`](src/openpi/training/config.py): Defines fine-tuning hyperparameters, data config, and weight loader.
We provide example fine-tuning configs for both [π₀](src/openpi/training/config.py) and [π₀-FAST](src/openpi/training/config.py) on Libero data.
Before we can run training, we need to compute the normalization statistics for the training data. Run the script below with the name of your training config:
```bash
uv run scripts/compute_norm_stats.py --config-name pi0_fast_libero
```
Now we can kick off training with the following command (the `--overwrite` flag is used to overwrite existing checkpoints if you rerun fine-tuning with the same config):
```bash
XLA_PYTHON_CLIENT_MEM_FRACTION=0.9 uv run scripts/train.py pi0_fast_libero --exp-name=my_experiment --overwrite
```
The command will log training progress to the console and save checkpoints to the `checkpoints` directory. You can also monitor training progress on the Weights & Biases dashboard. To make maximal use of the GPU memory, set `XLA_PYTHON_CLIENT_MEM_FRACTION=0.9` before running training -- this enables JAX to use up to 90% of the GPU memory (vs. the default of 75%).
### 3. Spinning up a policy server and running inference
Once training is complete, we can run inference by spinning up a policy server and then querying it from a Libero evaluation script. Launching a model server is easy (we use the checkpoint for iteration 20,000 for this example, modify as needed):
```bash
uv run scripts/serve_policy.py policy:checkpoint --policy.config=pi0_fast_libero --policy.dir=checkpoints/pi0_fast_libero/my_experiment/20000
```
This will spin up a server that listens on port 8000 and waits for observations to be sent to it. We can then run the Libero evaluation script to query the server. For instructions how to install Libero and run the evaluation script, see the [Libero README](examples/libero/README.md).
### More Examples
We provide more examples for how to fine-tune and run inference with our models on the ALOHA platform in the following READMEs:
- [ALOHA Simulator](examples/aloha_sim)
- [ALOHA Real](examples/aloha_real)
## Troubleshooting
We will collect common issues and their solutions here. If you encounter an issue, please check here first. If you can't find a solution, please file an issue on the repo (see [here](CONTRIBUTING.md) for guidelines).
| Issue | Resolution |
| ----------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `uv sync` fails with dependency conflicts | Try removing the virtual environment directory (`rm -rf .venv`) and running `uv sync` again. If issues persist, check that you have the latest version of `uv` installed (`uv self update`). |
| Training runs out of GPU memory | Make sure you set `XLA_PYTHON_CLIENT_MEM_FRACTION=0.9` before running training to allow JAX to use more GPU memory. You can also try reducing the batch size in your training config. |
| Policy server connection errors | Check that the server is running and listening on the expected port. Verify network connectivity and firewall settings between client and server. |
| Missing norm stats error when training | Run `scripts/compute_norm_stats.py` with your config name before starting training. |
| Dataset download fails | Check your internet connection. If using `local_files_only=True`, verify the dataset exists locally. For HuggingFace datasets, ensure you're logged in (`huggingface-cli login`). |
| CUDA/GPU errors | Verify NVIDIA drivers and CUDA toolkit are installed correctly. For Docker, ensure nvidia-container-toolkit is installed. Check GPU compatibility. |
| Import errors when running examples | Make sure you've installed all dependencies with `uv sync` and activated the virtual environment. Some examples may have additional requirements listed in their READMEs. |
| Action dimensions mismatch | Verify your data processing transforms match the expected input/output dimensions of your robot. Check the action space definitions in your policy classes. | | {
"source": "Physical-Intelligence/openpi",
"title": "README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 14641
} |
### Docker Setup
All of the examples in this repo provide instructions for being run both normally and using Docker. Although not required, the Docker option is recommended: it simplifies software installation, produces a more stable environment, and, for examples that depend on ROS, lets you avoid installing ROS and cluttering your machine.
- Basic Docker installation instructions are [here](https://docs.docker.com/engine/install/).
- Docker must be installed in [rootless mode](https://docs.docker.com/engine/security/rootless/).
- To use your GPU you must also install the [NVIDIA container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
- The version of docker installed with `snap` is incompatible with the NVIDIA container toolkit, preventing it from accessing `libnvidia-ml.so` ([issue](https://github.com/NVIDIA/nvidia-container-toolkit/issues/154)). The snap version can be uninstalled with `sudo snap remove docker`.
- Docker Desktop is also incompatible with the NVIDIA runtime ([issue](https://github.com/NVIDIA/nvidia-container-toolkit/issues/229)). Docker Desktop can be uninstalled with `sudo apt remove docker-desktop`.
If starting from scratch and your host machine is Ubuntu 22.04, you can accomplish all of the above with the convenience scripts `scripts/docker/install_docker_ubuntu22.sh` and `scripts/docker/install_nvidia_container_toolkit.sh`.
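For example, from the openpi repo root (the scripts may prompt for sudo):
```bash
# Convenience scripts referenced above, for a fresh Ubuntu 22.04 host
bash scripts/docker/install_docker_ubuntu22.sh
bash scripts/docker/install_nvidia_container_toolkit.sh
```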
During the first run of any example, Docker will build the images. Go grab a coffee while this happens. Subsequent runs will be faster since the images are cached. | {
"source": "Physical-Intelligence/openpi",
"title": "docs/docker.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/docs/docker.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 1615
} |
# Running openpi models remotely
We provide utilities for running openpi models remotely. This is useful for running inference on more powerful GPUs off-robot, and also helps keep the robot and policy environments separate (and e.g. avoid dependency hell with robot software).
## Starting a remote policy server
To start a remote policy server, you can simply run the following command:
```bash
uv run scripts/serve_policy.py --env=[DROID | ALOHA | LIBERO]
```
The `env` argument specifies which $\pi_0$ checkpoint should be loaded. Under the hood, this script will execute a command like the following, which you can use to start a policy server, e.g. for checkpoints you trained yourself (here an example for the DROID environment):
```bash
uv run scripts/serve_policy.py policy:checkpoint --policy.config=pi0_fast_droid --policy.dir=s3://openpi-assets/checkpoints/pi0_fast_droid
```
This will start a policy server that will serve the policy specified by the `config` and `dir` arguments. The policy will be served on the specified port (default: 8000).
## Querying the remote policy server from your robot code
We provide a client utility with minimal dependencies that you can easily embed into any robot codebase.
First, install the `openpi-client` package in your robot environment:
```bash
cd $OPENPI_ROOT/packages/openpi-client
pip install -e .
```
Then, you can use the client to query the remote policy server from your robot code. Here's an example of how to do this:
```python
from openpi_client import websocket_client_policy
policy_client = websocket_client_policy.WebsocketClientPolicy(host="10.32.255.0", port=8000)
action_chunk = policy_client.infer(example)["actions"]
```
Here, the `host` and `port` arguments specify the IP address and port of the remote policy server. You can also specify these as command-line arguments to your robot code, or hard-code them in your robot codebase. The `example` is a dictionary of observations and the prompt, following the specification of the policy inputs for the policy you are serving. We have concrete examples of how to construct this dictionary for different environments in the [simple client example](examples/simple_client/main.py). | {
"source": "Physical-Intelligence/openpi",
"title": "docs/remote_inference.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/docs/remote_inference.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 2216
} |
# Run Aloha (Real Robot)
This example demonstrates how to run openpi with a real robot using an [ALOHA setup](https://github.com/tonyzhaozh/aloha). See [here](../../docs/remote_inference.md) for instructions on how to load checkpoints and run inference. We list the relevant checkpoint paths for each provided fine-tuned model below.
## Prerequisites
This repo uses a fork of the ALOHA repo, with very minor modifications to use Realsense cameras.
1. Follow the [hardware installation instructions](https://github.com/tonyzhaozh/aloha?tab=readme-ov-file#hardware-installation) in the ALOHA repo.
1. Modify the `third_party/aloha/aloha_scripts/realsense_publisher.py` file to use serial numbers for your cameras.
## With Docker
```bash
export SERVER_ARGS="--env ALOHA --default_prompt='take the toast out of the toaster'"
docker compose -f examples/aloha_real/compose.yml up --build
```
## Without Docker
Terminal window 1:
```bash
# Create virtual environment
uv venv --python 3.10 examples/aloha_real/.venv
source examples/aloha_real/.venv/bin/activate
uv pip sync examples/aloha_real/requirements.txt
uv pip install -e packages/openpi-client
# Run the robot
python examples/aloha_real/main.py
```
Terminal window 2:
```bash
roslaunch --wait aloha ros_nodes.launch
```
Terminal window 3:
```bash
uv run scripts/serve_policy.py --env ALOHA --default_prompt='take the toast out of the toaster'
```
## **ALOHA Checkpoint Guide**
The `pi0_base` model can be used zero-shot for a simple task on the ALOHA platform, and we additionally provide two example fine-tuned checkpoints, "fold the towel" and "open the tupperware and put the food on the plate," which can perform more advanced tasks on the ALOHA.
While we've found the policies to work in unseen conditions across multiple ALOHA stations, we provide some pointers here on how best to set up scenes to maximize the chance of policy success. We cover the prompts to use for the policies, objects we've seen them work well on, and well-represented initial state distributions. Running these policies zero-shot is still a very experimental feature, and there is no guarantee that they will work on your robot. The recommended way to use `pi0_base` is by fine-tuning with data from the target robot.
---
### **Toast Task**
This task involves the robot taking two pieces of toast out of a toaster and placing them on a plate.
- **Checkpoint path**: `s3://openpi-assets/checkpoints/pi0_base`
- **Prompt**: "take the toast out of the toaster"
- **Objects needed**: Two pieces of toast, a plate, and a standard toaster.
- **Object Distribution**:
- Works on both real toast and rubber fake toast
- Compatible with standard 2-slice toasters
- Works with plates of varying colors
### **Scene Setup Guidelines**
<img width="500" alt="Screenshot 2025-01-31 at 10 06 02 PM" src="https://github.com/user-attachments/assets/3d043d95-9d1c-4dda-9991-e63cae61e02e" />
- The toaster should be positioned in the top-left quadrant of the workspace.
- Both pieces of toast should start inside the toaster, with at least 1 cm of bread sticking out from the top.
- The plate should be placed roughly in the lower-center of the workspace.
- Works with both natural and synthetic lighting, but avoid making the scene too dark (e.g., don't place the setup inside an enclosed space or under a curtain).
### **Towel Task**
This task involves folding a small towel (e.g., roughly the size of a hand towel) into eighths.
- **Checkpoint path**: `s3://openpi-assets/checkpoints/pi0_aloha_towel`
- **Prompt**: "fold the towel"
- **Object Distribution**:
- Works on towels of varying solid colors
- Performance is worse on heavily textured or striped towels
### **Scene Setup Guidelines**
<img width="500" alt="Screenshot 2025-01-31 at 10 01 15 PM" src="https://github.com/user-attachments/assets/9410090c-467d-4a9c-ac76-96e5b4d00943" />
- The towel should be flattened and roughly centered on the table.
- Choose a towel that does not blend in with the table surface.
### **Tupperware Task**
This task involves opening a tupperware filled with food and pouring the contents onto a plate.
- **Checkpoint path**: `s3://openpi-assets/checkpoints/pi0_aloha_tupperware`
- **Prompt**: "open the tupperware and put the food on the plate"
- **Objects needed**: Tupperware, food (or food-like items), and a plate.
- **Object Distribution**:
- Works on various types of fake food (e.g., fake chicken nuggets, fries, and fried chicken).
- Compatible with tupperware of different lid colors and shapes, with best performance on square tupperware with a corner flap (see images below).
- The policy has seen plates of varying solid colors.
### **Scene Setup Guidelines**
<img width="500" alt="Screenshot 2025-01-31 at 10 02 27 PM" src="https://github.com/user-attachments/assets/60fc1de0-2d64-4076-b903-f427e5e9d1bf" />
- Best performance observed when both the tupperware and plate are roughly centered in the workspace.
- Positioning:
- Tupperware should be on the left.
- Plate should be on the right or bottom.
- The tupperware flap should point toward the plate.
## Training on your own Aloha dataset
1. Convert the dataset to the LeRobot dataset v2.0 format.
We provide a script [convert_aloha_data_to_lerobot.py](./convert_aloha_data_to_lerobot.py) that converts the dataset to the LeRobot dataset v2.0 format. As an example we have converted the `aloha_pen_uncap_diverse_raw` dataset from the [BiPlay repo](https://huggingface.co/datasets/oier-mees/BiPlay/tree/main/aloha_pen_uncap_diverse_raw) and uploaded it to the HuggingFace Hub as [physical-intelligence/aloha_pen_uncap_diverse](https://huggingface.co/datasets/physical-intelligence/aloha_pen_uncap_diverse).
2. Define a training config that uses the custom dataset.
We provide the [pi0_aloha_pen_uncap config](../../src/openpi/training/config.py) as an example. You should refer to the root [README](../../README.md) for how to run training with the new config.
IMPORTANT: Our base checkpoint includes normalization stats from various common robot configurations. When fine-tuning a base checkpoint with a custom dataset from one of these configurations, we recommend using the corresponding normalization stats provided in the base checkpoint. In the example, this is done by specifying the trossen asset_id and a path to the pretrained checkpoint’s asset directory within the AssetsConfig. | {
"source": "Physical-Intelligence/openpi",
"title": "examples/aloha_real/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/aloha_real/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 6501
} |
# Run Aloha Sim
## With Docker
```bash
export SERVER_ARGS="--env ALOHA_SIM"
docker compose -f examples/aloha_sim/compose.yml up --build
```
## Without Docker
Terminal window 1:
```bash
# Create virtual environment
uv venv --python 3.10 examples/aloha_sim/.venv
source examples/aloha_sim/.venv/bin/activate
uv pip sync examples/aloha_sim/requirements.txt
uv pip install -e packages/openpi-client
# Run the simulation
MUJOCO_GL=egl python examples/aloha_sim/main.py
```
Note: If you are seeing EGL errors, you may need to install the following dependencies:
```bash
sudo apt-get install -y libegl1-mesa-dev libgles2-mesa-dev
```
Terminal window 2:
```bash
# Run the server
uv run scripts/serve_policy.py --env ALOHA_SIM
``` | {
"source": "Physical-Intelligence/openpi",
"title": "examples/aloha_sim/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/aloha_sim/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 731
} |
# Run DROID
This example shows how to run the fine-tuned $\pi_0$-FAST-DROID model on the [DROID robot platform](https://github.com/droid-dataset/droid). We also offer a $\pi_0$-DROID model that is fine-tuned from $\pi_0$ and uses flow action decoding. You can use it by replacing `pi0_fast_droid` with `pi0_droid` in the commands below. In practice, we find that out of the box, the $\pi_0$-FAST-DROID model is better at following language commands, so we recommend it as the default checkpoint for DROID evaluation. If you want to fine-tune on a DROID task that requires fast inference, you may still want to consider using the $\pi_0$-DROID model, since it decodes faster. For more details, please see the [FAST paper](https://pi.website/research/fast).
## Step 1: Start a policy server
Since the DROID control laptop does not have a powerful GPU, we will start a remote policy server on a different machine with a more powerful GPU and then query it from the DROID control laptop during inference.
1. On a machine with a powerful GPU (~NVIDIA 4090), clone and install the `openpi` repository following the instructions in the [README](https://github.com/Physical-Intelligence/openpi).
2. Start the OpenPI server via the following command:
```bash
uv run scripts/serve_policy.py policy:checkpoint --policy.config=pi0_fast_droid --policy.dir=s3://openpi-assets/checkpoints/pi0_fast_droid
```
You can also run the equivalent command below:
```bash
uv run scripts/serve_policy.py --env=DROID
```
## Step 2: Run the DROID robot
1. Make sure you have the most recent version of the DROID package installed on both the DROID control laptop and the NUC.
2. On the control laptop, activate your DROID conda environment.
3. Clone the openpi repo and install the openpi client, which we will use to connect to the policy server (this has very few dependencies and should be very fast to install): with the DROID conda environment activated, run `cd $OPENPI_ROOT/packages/openpi-client && pip install -e .`.
4. Install `tyro`, which we will use for command line parsing: `pip install tyro`.
5. Copy the `main.py` file from this directory to the `$DROID_ROOT/scripts` directory.
6. Replace the camera IDs in the `main.py` file with the IDs of your cameras (you can find the camera IDs by running `ZED_Explore` in the command line, which will open a tool that shows you all connected cameras and their IDs -- you can also use it to make sure that the cameras are well-positioned to see the scene you want the robot to interact with).
7. Run the `main.py` file. Make sure to point the host IP and port arguments at the policy server. (To make sure the server machine is reachable from the DROID laptop, you can run `ping <server_ip>` from the DROID laptop.) Also make sure to specify the external camera to use for the policy (we only input one external camera); choose from ["left", "right"].
```bash
python3 scripts/main.py --remote_host=<server_ip> --remote_port=<server_port> --external_camera="left"
```
The script will ask you to enter a free-form language instruction for the robot to follow. Make sure to point the cameras at the scene you want the robot to interact with. You _do not_ need to carefully control camera angle, object positions, etc. The policy is fairly robust in our experience. Happy prompting!
# Troubleshooting
| Issue | Solution |
|-------|----------|
| Cannot reach policy server | Make sure the server is running and the IP and port are correct. You can check that the server machine is reachable by running `ping <server_ip>` from the DROID laptop. |
| Cannot find cameras | Make sure the camera IDs are correct and that the cameras are connected to the DROID laptop. Sometimes replugging the cameras can help. You can check all connected cameras by running `ZED_Explore` in the command line. |
| Policy inference is slow / inconsistent | Try using a wired internet connection for the DROID laptop to reduce latency (0.5 - 1 sec latency per chunk is normal). |
| Policy does not perform the task well | In our experiments, the policy could perform simple table top manipulation tasks (pick-and-place) across a wide range of environments, camera positions, and lighting conditions. If the policy does not perform the task well, you can try modifying the scene or object placement to make the task easier. Also make sure that the camera view you are passing to the policy can see all relevant objects in the scene (the policy is only conditioned on a single external camera + wrist camera, make sure you are feeding the desired camera to the policy). Use `ZED_Explore` to check that the camera view you are passing to the policy can see all relevant objects in the scene. Finally, the policy is far from perfect and will fail on more complex manipulation tasks, but it usually makes a decent effort. :) | | {
"source": "Physical-Intelligence/openpi",
"title": "examples/droid/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/droid/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 4841
} |
# LIBERO Benchmark
This example runs the LIBERO benchmark: https://github.com/Lifelong-Robot-Learning/LIBERO
Note: When updating requirements.txt in this directory, there is an additional flag `--extra-index-url https://download.pytorch.org/whl/cu113` that must be added to the `uv pip compile` command.
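For reference, a command along these lines can be used to regenerate the pinned requirements; the input file name here is illustrative and may differ from what this directory actually uses:
```bash
uv pip compile examples/libero/requirements.in \
  --extra-index-url https://download.pytorch.org/whl/cu113 \
  -o examples/libero/requirements.txt
```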
This example requires git submodules to be initialized. Don't forget to run:
```bash
git submodule update --init --recursive
```
## With Docker
```bash
# Grant access to the X11 server:
sudo xhost +local:docker
export SERVER_ARGS="--env LIBERO"
docker compose -f examples/libero/compose.yml up --build
```
## Without Docker
Terminal window 1:
```bash
# Create virtual environment
uv venv --python 3.8 examples/libero/.venv
source examples/libero/.venv/bin/activate
uv pip sync examples/libero/requirements.txt third_party/libero/requirements.txt --extra-index-url https://download.pytorch.org/whl/cu113 --index-strategy=unsafe-best-match
uv pip install -e packages/openpi-client
uv pip install -e third_party/libero
export PYTHONPATH=$PYTHONPATH:$PWD/third_party/libero
# Run the simulation
python examples/libero/main.py
```
Terminal window 2:
```bash
# Run the server
uv run scripts/serve_policy.py --env LIBERO
```
## Results
If you follow the training instructions and hyperparameters in the `pi0_libero` and `pi0_fast_libero` configs, you should get results similar to the following:
| Model | Libero Spatial | Libero Object | Libero Goal | Libero 10 | Average |
|-------|---------------|---------------|-------------|-----------|---------|
| π0-FAST @ 30k (finetuned) | 96.4 | 96.8 | 88.6 | 60.2 | 85.5 |
| π0 @ 30k (finetuned) | 96.8 | 98.8 | 95.8 | 85.2 | 94.15 |
Note that the hyperparameters for these runs are not tuned and $\pi_0$-FAST does not use a FAST tokenizer optimized for Libero. The results could likely be improved with more tuning; we mainly use these results as an example of how to use openpi to fine-tune $\pi_0$ models on a new dataset. | {
"source": "Physical-Intelligence/openpi",
"title": "examples/libero/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/libero/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 1985
} |
# Simple Client
A minimal client that sends observations to the server and prints the inference rate.
You can specify which runtime environment to use with the `--env` flag. You can see the available options by running:
```bash
uv run examples/simple_client/main.py --help
```
## With Docker
```bash
export SERVER_ARGS="--env ALOHA_SIM"
docker compose -f examples/simple_client/compose.yml up --build
```
## Without Docker
Terminal window 1:
```bash
uv run examples/simple_client/main.py --env DROID
```
Terminal window 2:
```bash
uv run scripts/serve_policy.py --env DROID
``` | {
"source": "Physical-Intelligence/openpi",
"title": "examples/simple_client/README.md",
"url": "https://github.com/Physical-Intelligence/openpi/blob/main/examples/simple_client/README.md",
"date": "2024-10-21T15:23:28",
"stars": 2297,
"description": null,
"file_size": 588
} |
<div align="center">
<img src="https://github.com/user-attachments/assets/a4ccbc60-5248-4dca-8cec-09a6385c6d0f" width="768" height="192">
</div>
<strong>ClearerVoice-Studio</strong> is an open-source, AI-powered speech processing toolkit designed for researchers, developers, and end-users. It provides capabilities of speech enhancement, speech separation, speech super-resolution, target speaker extraction, and more. The toolkit provides state-of-the-art pre-trained models, along with training and inference scripts, all accessible from this repository.
#### 👉🏻[HuggingFace Demo](https://huggingface.co/spaces/alibabasglab/ClearVoice)👈🏻 | 👉🏻[ModelScope Demo](https://modelscope.cn/studios/iic/ClearerVoice-Studio) | 👉🏻[SpeechScore Demo](https://huggingface.co/spaces/alibabasglab/SpeechScore)👈🏻
---
 Please leave your ⭐ on our GitHub to support this community project!
记得点击右上角的星星⭐来支持我们一下,您的支持是我们更新模型的最大动力!
## News :fire:
- Upcoming: More tasks will be added to ClearVoice.
- [2025.1] The ClearVoice demo is ready to try on both [HuggingFace](https://huggingface.co/spaces/alibabasglab/ClearVoice) and [ModelScope](https://modelscope.cn/studios/iic/ClearerVoice-Studio). Note that HuggingFace has a limited GPU quota, while ModelScope offers more GPU usage quota.
- [2025.1] ClearVoice now offers **speech super-resolution**, also known as bandwidth extension. This feature improves the perceptual quality of speech by converting low-resolution audio (with an effective sampling rate of at least 16,000 Hz) into high-resolution audio with a sampling rate of 48,000 Hz. A fully upscaled **LJSpeech-1.1-48kHz dataset** can be downloaded from [HuggingFace](https://huggingface.co/datasets/alibabasglab/LJSpeech-1.1-48kHz) and [ModelScope](https://modelscope.cn/datasets/iic/LJSpeech-1.1-48kHz).
- [2025.1] ClearVoice now supports more audio formats including **"wav", "aac", "ac3", "aiff", "flac", "m4a", "mp3", "ogg", "opus", "wma", "webm"**, etc. It also supports both mono and stereo channels with 16-bit or 32-bit precisions. A latest version of [ffmpeg](https://github.com/FFmpeg/FFmpeg) is required for audio codecs.
- [2024.12] Uploaded pre-trained models to ModelScope. Users can now download the models from either [ModelScope](https://www.modelscope.cn/models/iic/ClearerVoice-Studio/summary) or [Huggingface](https://huggingface.co/alibabasglab)
- [2024.11] Our FRCRN speech denoiser has been used over **3.0 million** times on [ModelScope](https://modelscope.cn/models/iic/speech_frcrn_ans_cirm_16k)
- [2024.11] Our MossFormer speech separator has been used over **2.5 million** times on [ModelScope](https://modelscope.cn/models/iic/speech_mossformer_separation_temporal_8k)
- [2024.11] Release of this repository
### 🌟 Why Choose ClearerVoice-Studio?
- **Pre-Trained Models:** Includes cutting-edge pre-trained models, fine-tuned on extensive, high-quality datasets. No need to start from scratch!
- **Ease of Use:** Designed for seamless integration with your projects, offering a simple yet flexible interface for inference and training.
- **Comprehensive Features:** Combines advanced algorithms for multiple speech processing tasks in one platform.
- **Community-Driven:** Built for researchers, developers, and enthusiasts to collaborate and innovate together.
## Contents of this repository
This repository is organized into three main components: **[ClearVoice](https://github.com/modelscope/ClearerVoice-Studio/tree/main/clearvoice)**, **[Train](https://github.com/modelscope/ClearerVoice-Studio/tree/main/train)**, and **[SpeechScore](https://github.com/modelscope/ClearerVoice-Studio/tree/main/speechscore)**.
### 1. **ClearVoice [[Readme](https://github.com/modelscope/ClearerVoice-Studio/blob/main/clearvoice/README.md)][[文档](https://github.com/modelscope/ClearerVoice-Studio/blob/main/clearvoice/README.md)]**
ClearVoice offers a user-friendly solution for speech processing tasks such as speech denoising, separation, super-resolution, audio-visual target speaker extraction, and more. It is designed as a unified inference platform that leverages pre-trained models (e.g., [FRCRN](https://arxiv.org/abs/2206.07293), [MossFormer](https://arxiv.org/abs/2302.11824)), all trained on extensive datasets. If you're looking for a tool to improve speech quality, ClearVoice is the perfect choice. Simply click on [`ClearVoice`](https://github.com/modelscope/ClearerVoice-Studio/tree/main/clearvoice) and follow our detailed instructions to get started.
### 2. **Train**
For advanced researchers and developers, we provide model fine-tuning and training scripts for all the tasks offered in ClearVoice and more:
- **Task 1: [Speech enhancement](train/speech_enhancement)** (16kHz & 48kHz)
- **Task 2: [Speech separation](train/speech_separation)** (8kHz & 16kHz)
- **Task 3: [Speech super-resolution](https://github.com/modelscope/ClearerVoice-Studio/tree/main/train/speech_super_resolution)** (48kHz) (coming soon)
- **Task 4: [Target speaker extraction](train/target_speaker_extraction)**
- **Sub-Task 1: Audio-only Speaker Extraction Conditioned on a Reference Speech** (8kHz)
- **Sub-Task 2: Audio-visual Speaker Extraction Conditioned on Face (Lip) Recording** (16kHz)
- **Sub-Task 3: Audio-visual Speaker Extraction Conditioned on Body Gestures** (16kHz)
- **Sub-Task 4: Neuro-steered Speaker Extraction Conditioned on EEG Signals** (16kHz)
Contributors are welcome to add more model architectures and tasks!
### 3. **SpeechScore [[Readme](https://github.com/modelscope/ClearerVoice-Studio/blob/main/speechscore/README.md)][[文档](https://github.com/modelscope/ClearerVoice-Studio/blob/main/speechscore/README.md)]**
<a href="https://github.com/modelscope/ClearerVoice-Studio/tree/main/speechscore">`SpeechScore`<a/> is a speech quality assessment toolkit. We include it here to evaluate different model performance. SpeechScore includes many popular speech metrics:
- Signal-to-Noise Ratio (SNR)
- Perceptual Evaluation of Speech Quality (PESQ)
- Short-Time Objective Intelligibility (STOI)
- Deep Noise Suppression Mean Opinion Score (DNSMOS)
- Scale-Invariant Signal-to-Distortion Ratio (SI-SDR)
- and many more quality benchmarks
## Contact
If you have any comments or questions about ClearerVoice-Studio, feel free to raise an issue in this repository or contact us directly at:
- email: {shengkui.zhao, zexu.pan}@alibaba-inc.com
Alternatively, you are welcome to join our DingTalk and WeChat groups to share and discuss algorithms, technology, and user experience feedback. You may scan the following QR codes to join the corresponding official chat groups.
<p align="center">
<table>
<tr>
<td style="text-align:center;">
<a href="./asset/QR.jpg"><img alt="ClearVoice in DingTalk" src="https://img.shields.io/badge/ClearVoice-DingTalk-d9d9d9"></a>
</td>
<td style="text-align:center;">
<a href="./asset/QR.jpg"><img alt="ClearVoice in WeChat" src="https://img.shields.io/badge/ClearVoice-WeChat-d9d9d9"></a>
</td>
</tr>
<tr>
<td style="text-align:center;">
<img alt="Light" src="./asset/dingtalk.png" width="68%" />
<td style="text-align:center;">
<img alt="Light" src="./asset/qr.png" width="23%" />
</td>
</tr>
</table>
</p>
## Friend Links
Check out some awesome GitHub repositories from the Speech Lab of the Institute for Intelligent Computing, Alibaba Group.
<p align="center">
<a href="https://github.com/FunAudioLLM/InspireMusic" target="_blank">
<img alt="Demo" src="https://img.shields.io/badge/Repo | Space-InspireMusic?labelColor=&label=InspireMusic&color=green"></a>
<a href="https://github.com/modelscope/FunASR" target="_blank">
<img alt="Github" src="https://img.shields.io/badge/Repo | Space-FunASR?labelColor=&label=FunASR&color=green"></a>
<a href="https://github.com/FunAudioLLM" target="_blank">
<img alt="Demo" src="https://img.shields.io/badge/Repo | Space-FunAudioLLM?labelColor=&label=FunAudioLLM&color=green"></a>
<a href="https://github.com/modelscope/3D-Speaker" target="_blank">
<img alt="Demo" src="https://img.shields.io/badge/Repo | Space-3DSpeaker?labelColor=&label=3D-Speaker&color=green"></a>
</p>
## Acknowledgements
ClearerVoice-Studio contains third-party components and code modified from some open-source repos, including: <br>
[Speechbrain](https://github.com/speechbrain/speechbrain), [ESPnet](https://github.com/espnet), [TalkNet-ASD](https://github.com/TaoRuijie/TalkNet-ASD) | {
"source": "modelscope/ClearerVoice-Studio",
"title": "README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 8571
} |
# ClearVoice
## 👉🏻[HuggingFace Space Demo](https://huggingface.co/spaces/alibabasglab/ClearVoice)👈🏻 | 👉🏻[ModelScope Space Demo](https://modelscope.cn/studios/iic/ClearerVoice-Studio)👈🏻
## Table of Contents
- [1. Introduction](#1-introduction)
- [2. Usage](#2-usage)
- [3. Model Performance](#3-model-performance)
## 1. Introduction
ClearVoice offers a unified inference platform for `speech enhancement`, `speech separation`, and `audio-visual target speaker extraction`. It is designed to simplify the adoption of our pre-trained models for your speech processing needs or their integration into your projects. Currently, we provide the following pre-trained models:
| Tasks (Sampling rate) | Models (HuggingFace Links)|
|-------|--------------------------|
|Speech Enhancement (16kHz & 48kHz)| `MossFormer2_SE_48K` ([link](https://huggingface.co/alibabasglab/MossFormer2_SE_48K)), `FRCRN_SE_16K` ([link](https://huggingface.co/alibabasglab/FRCRN_SE_16K)), `MossFormerGAN_SE_16K` ([link](https://huggingface.co/alibabasglab/MossFormerGAN_SE_16K)) |
|Speech Separation (16kHz)|`MossFormer2_SS_16K` ([link](https://huggingface.co/alibabasglab/MossFormer2_SS_16K))|
|Audio-Visual Target Speaker Extraction (16kHz)|`AV_MossFormer2_TSE_16K` ([link](https://huggingface.co/alibabasglab/AV_MossFormer2_TSE_16K))|
You don't need to manually download the pre-trained models—they are automatically fetched during inference.
## 2. Usage
### Step-by-Step Guide
If you haven't created a Conda environment for ClearerVoice-Studio yet, follow steps 1 and 2. Otherwise, skip directly to step 3.
1. **Clone the Repository**
``` sh
git clone https://github.com/modelscope/ClearerVoice-Studio.git
```
2. **Create Conda Environment**
``` sh
cd ClearerVoice-Studio
conda create -n ClearerVoice-Studio python=3.8
conda activate ClearerVoice-Studio
pip install -r requirements.txt
```
It should also work with Python 3.9, 3.10, and 3.12!
> **Note:** On Ubuntu and Windows, if you run into prerequisite issues with the C++ build environment or with updating pip, setuptools, or wheel during installation, please resolve them manually (thanks @RichardQin1).
3. **Run Demo**
``` sh
cd clearvoice
python demo.py
```
or
``` sh
cd clearvoice
python demo_with_more_comments.py
```
- You may activate each demo case by setting it to True in `demo.py` and `demo_with_more_comments.py`.
- Supported audio format: .flac .wav
- Supported video format: .avi .mp4 .mov .webm
4. **Use Scripts**
Use `MossFormer2_SE_48K` model for fullband (48kHz) speech enhancement task:
```python
from clearvoice import ClearVoice
myClearVoice = ClearVoice(task='speech_enhancement', model_names=['MossFormer2_SE_48K'])
#process single wave file
output_wav = myClearVoice(input_path='samples/input.wav', online_write=False)
myClearVoice.write(output_wav, output_path='samples/output_MossFormer2_SE_48K.wav')
#process wave directory
myClearVoice(input_path='samples/path_to_input_wavs', online_write=True, output_path='samples/path_to_output_wavs')
#process wave list file
myClearVoice(input_path='samples/scp/audio_samples.scp', online_write=True, output_path='samples/path_to_output_wavs_scp')
```
Parameter Description:
- `task`: Choose one of the three tasks `speech_enhancement`, `speech_separation`, and `target_speaker_extraction`
- `model_names`: List of model names, choose one or more models for the task
- `input_path`: Path to the input audio/video file, input audio/video directory, or a list file (.scp)
- `online_write`: Set to `True` to enable saving the enhanced/separated audio/video directly to local files during processing; otherwise, the enhanced/separated audio is returned. (`False` is only supported for `speech_enhancement` and `speech_separation` when processing a single wave file.)
- `output_path`: Path to a file or a directory to save the enhanced/separated audio/video file
A more detailed Chinese-language tutorial is available here: https://stable-learn.com/zh/clearvoice-studio-tutorial
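As another usage sketch, the same interface can be pointed at the speech separation model listed above (the directory paths here are illustrative):
```python
from clearvoice import ClearVoice

# Speech separation with the 16 kHz MossFormer2 separator
myClearVoice = ClearVoice(task='speech_separation', model_names=['MossFormer2_SS_16K'])

# Separate every mixture in a directory and write the separated tracks to disk
myClearVoice(input_path='samples/path_to_input_mixtures', online_write=True, output_path='samples/path_to_separated_wavs')
```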
## 3. Model Performance
**Speech enhancement models:**
We evaluated our released speech enhancement models on the popular benchmarks: [VoiceBank+DEMAND](https://paperswithcode.com/dataset/demand) testset (16kHz & 48kHz) and [DNS-Challenge-2020](https://paperswithcode.com/dataset/deep-noise-suppression-2020) (Interspeech) testset (non-reverb, 16kHz). Unlike most published papers, which tailor each model to a specific test set, our evaluation uses unified models on both test sets. The evaluation metrics are generated by [SpeechScore](https://github.com/modelscope/ClearerVoice-Studio/tree/main/speechscore).
**VoiceBank+DEMAND testset (tested on 16kHz)**
|Model |PESQ |NB_PESQ |CBAK |COVL |CSIG |STOI |SISDR |SNR |SRMR |SSNR |P808_MOS|SIG |BAK |OVRL |ISR |SAR |SDR |FWSEGSNR |LLR |LSD |MCD|
|----- |--- |------- |---- |---- |---- |---- |----- |--- |---- |---- |------ |--- |--- |---- |--- |--- |--- |-------- |--- |--- |---|
|Noisy |1.97 | 3.32 |2.79 |2.70 |3.32 |0.92 |8.44 |9.35 |7.81 |6.13 |3.05 |3.37 |3.32 |2.79 |28.11 |8.53 |8.44 |14.77 |0.78 |1.40 |4.15|
|FRCRN_SE_16K |3.23 | 3.86 |3.47 |**3.83**|4.29 |0.95 |19.22 |19.16 |9.21 |7.60 |**3.59**|3.46 |**4.11**|3.20 |12.66 |21.16 |11.71 |**20.76**|0.37 |0.98 |**0.56**|
|MossFormerGAN_SE_16K|**3.47**|**3.96**|**3.50**|3.73 |**4.40**|**0.96**|**19.45**|**19.36**|9.07 |**9.09**|3.57 |**3.50**|4.09 |**3.23**|25.98 |21.18 |**19.42**|20.20 |**0.34**|**0.79**|0.70|
|MossFormer2_SE_48K |3.16 | 3.77 |3.32 |3.58 |4.14 |0.95 |19.38 |19.22 |**9.61**|6.86 |3.53 |**3.50**|4.07 |3.22 |**12.05**|**21.84**|11.47 |16.69 |0.57 |1.72 |0.62|
**DNS-Challenge-2020 testset (tested on 16kHz)**
|Model |PESQ |NB_PESQ |CBAK |COVL |CSIG |STOI |SISDR |SNR |SRMR |SSNR |P808_MOS|SIG |BAK |OVRL |ISR |SAR |SDR |FWSEGSNR |LLR |LSD |MCD|
|----- |--- |------- |---- |---- |---- |---- |----- |--- |---- |---- |------ |--- |--- |---- |--- |--- |--- |-------- |--- |--- |---|
|Noisy |1.58 | 2.16 |2.66 |2.06 |2.72 |0.91 |9.07 |9.95 |6.13 |9.35 |3.15 |3.39 |2.61 |2.48 |34.57 |9.09 |9.06 |15.87 |1.07 |1.88 |6.42|
|FRCRN_SE_16K |3.24 | 3.66 |3.76 |3.63 |4.31 |**0.98**|19.99 |19.89 |8.77 |7.60 |4.03 |3.58 |4.15 |3.33 |**8.90** |20.14 |7.93 |**22.59**|0.50 |1.69 |0.97|
|MossFormerGAN_SE_16K|**3.57**|**3.88**|**3.93**|**3.92**|**4.56**|**0.98**|**20.60**|**20.44**|8.68 |**14.03**|**4.05**|**3.58**|**4.18**|**3.36**|8.88 |**20.81**|**7.98** |21.62 |**0.45**|**1.65**|**0.89**|
|MossFormer2_SE_48K |2.94 | 3.45 |3.36 |2.94 |3.47 |0.97 |17.75 |17.65 |**9.26**|11.86 |3.92 |3.51 |4.13 |3.26 |8.55 |18.40 |7.48 |16.10 |0.98 |3.02 |1.15|
**VoiceBank+DEMAND testset (tested on 48kHz)** (We included our evaluations on other open-sourced models using SpeechScore)
|Model |PESQ |NB_PESQ |CBAK |COVL |CSIG |STOI |SISDR |SNR |SRMR |SSNR |P808_MOS|SIG |BAK |OVRL |ISR |SAR |SDR |FWSEGSNR |LLR |LSD |MCD|
|----- |--- |------- |---- |---- |---- |---- |----- |--- |---- |---- |------ |--- |--- |---- |--- |--- |--- |-------- |--- |--- |---|
|Noisy |1.97 | 2.87 |2.79 |2.70 |3.32 |0.92 |8.39 |9.30 |7.81 |6.13 |3.07 |3.35 |3.12 |2.69 |33.75 |8.42 |8.39 |13.98 |0.75 |1.45 |5.41|
|MossFormer2_SE_48K |**3.15**|**3.77**|**3.33**|**3.64**|**4.23**|**0.95**|**19.36**|**19.22**|9.61 |7.03 |**3.53**| 3.41 |**4.10**|**3.15**|**4.08**|**21.23** |4.06 |14.45 |NA |1.86 |**0.53**|
|Resemble_enhance |2.84 | 3.58 |3.14 |NA |NA |0.94 |12.42 |12.79 |9.08 |7.07 |**3.53**|**3.42**| 3.99 |3.12 |13.62 |12.66 |10.31 |14.56 |1.50 |1.66 | 1.54 |
|DeepFilterNet |3.03 | 3.71 |3.29 |3.55 |4.20 |0.94 |15.71 |15.66 |**9.66**|**7.19**|3.47 |3.40 |4.00 |3.10 |28.01 |16.20 |**15.79**|**15.69**|**0.55**|**0.94**| 1.77 |
- Resemble_enhance ([Github](https://github.com/resemble-ai/resemble-enhance)) is an open-source 44.1 kHz speech enhancement platform from Resemble-AI, released in 2023; we resampled to 48 kHz before evaluation.
- DeepFilterNet ([Github](https://github.com/Rikorose/DeepFilterNet)) is a low-complexity speech enhancement framework for full-band audio (48kHz) based on deep filtering.
> **Note:** We observed anomalies in two speech metrics, LLR and LSD, after processing with the 48 kHz models. We will further investigate the issue to identify the cause.
**Speech separation models:**
We evaluated our speech separation model `MossFormer2_SS_16K` on the popular benchmark testset: LRS2_2Mix (16 kHz), WSJ0-2Mix (8 kHz), Libri2Mix (8 kHz), WHAM! (8 kHz). We compare our model with following state-of-the-art models: [Conv-TasNet](https://arxiv.org/abs/1809.07454), [DualPathRNN](https://arxiv.org/abs/1910.06379), [DPTNet](https://arxiv.org/abs/2007.13975), [SepFormer](https://arxiv.org/abs/2010.13154), [TDANet](https://openreview.net/pdf?id=fzberKYWKsI), [TF-GridNet](https://arxiv.org/abs/2209.03952), [SPMamba](https://arxiv.org/abs/2404.02063). The testing results are taken from [TDANet Github repo](https://github.com/JusperLee/TDANet) and [SPMamba GitHub repo](https://github.com/JusperLee/SPMamba). The performance metric of [SI-SNRi](https://arxiv.org/abs/1811.02508) (SI-SNR improvement) is used for the evaluations.
|Model |LRS2_2Mix (16 kHz)|WSJ0-2Mix (8 kHz)|Libri2Mix (8kHz)|WHAM! (8 kHz)|
|------|------------------|-----------------|----------------|-------------|
|Conv-TasNet |10.6|15.3|12.2|12.7|
|DualPathRNN|12.7|18.8|16.1|13.7|
|DPTNet |13.3|20.2|16.7|14.9|
|SepFormer |13.5|20.4|17.0|14.4|
|TDANet Large|14.2|18.5|17.4|15.2|
|TF-GridNet |-|**22.8**|19.8|16.9|
|SPMamba |-|22.5|**19.9**|**17.4**|
|MossFormer2_SS_16K|**15.5**|22.0|16.7|**17.4**|
> **Note:** The MossFormer2_SS_16K results presented are from our unified model, evaluated without retraining on individual datasets. This 16 kHz model was used for speech separation on the 16 kHz test set, with scores then calculated on the downsampled 8 kHz audio. All comparison models were trained and tested separately on each dataset.
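For readers unfamiliar with the metric, SI-SNRi is simply the SI-SNR of the separated estimate minus the SI-SNR of the unprocessed mixture, both measured against the clean reference. A minimal NumPy sketch is shown below; it is not the exact evaluation code used for the table above:
```python
import numpy as np

def si_snr(est: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant SNR (dB) between an estimated and a reference signal."""
    est = est - est.mean()
    ref = ref - ref.mean()
    proj = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref  # project est onto ref
    noise = est - proj
    return 10 * np.log10((np.dot(proj, proj) + eps) / (np.dot(noise, noise) + eps))

def si_snr_improvement(est: np.ndarray, ref: np.ndarray, mix: np.ndarray) -> float:
    """SI-SNRi: gain of the separated estimate over the unprocessed mixture."""
    return si_snr(est, ref) - si_snr(mix, ref)

# Toy usage with synthetic signals; real evaluation uses aligned test-set audio.
rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)
mix = ref + 0.5 * rng.standard_normal(16000)
est = ref + 0.1 * rng.standard_normal(16000)
print(f"SI-SNRi: {si_snr_improvement(est, ref, mix):.2f} dB")
```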
**Speech super-resolution model:**
We demonstrated the effectiveness of our speech super-resolution model, `MossFormer2_SR_48K`, using the VoiceBank+DEMAND 48 kHz test set. For super-resolution evaluation, the test set was downsampled to 16 kHz, 24 kHz, and 32 kHz. The Log Spectral Distance (LSD) and PESQ metrics were used for evaluation. Recognizing that speech quality is impacted by both lower sampling rates and background noise, we also incorporated our speech enhancement model, `MossFormer2_SE_48K`, to reduce noise prior to super-resolution processing. Results are presented in the following table.
|Model | LSD (16 kHz) | LSD (24 kHz) | LSD (32 kHz) | LSD (48 kHz) | PESQ |
|------|--------|--------|--------|--------|-----|
|Origin|2.80 | 2.60 | 2.29 |1.46 |1.97|
|Enhanced|1.93 |1.52 | 1.50 |1.42 |3.15 |
For the 48 kHz case, speech super-resolution was not applied. The final two columns show that `MossFormer2_SE_48K` significantly improves the 16 kHz PESQ score but only marginally improves LSD. Therefore, the LSD improvements at 16 kHz, 24 kHz, and 32 kHz are primarily attributed to `MossFormer2_SR_48K`. | {
"source": "modelscope/ClearerVoice-Studio",
"title": "clearvoice/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/clearvoice/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 11871
} |
# SpeechScore
## 👉🏻[HuggingFace Space Demo](https://huggingface.co/spaces/alibabasglab/SpeechScore)👈🏻
## Table of Contents
- [1. Introduction](#1-introduction)
- [2. Usage](#2-usage)
- [3. Acknowledgements](#3-acknowledgements)
## 1. Introduction
SpeechScore is a wrapper designed for assessing speech quality. It includes a collection of commonly used speech quality metrics, as listed below:
| Index | Metrics | Short Description | Externel Link |
|-------|---------|-------------|---------------|
|1.| BSSEval {ISR, SAR, SDR} | ISR (Source Image-to-Spatial distortion Ratio) measures preservation/distortion of target source. SDR (Source-to-Distortion Ratio) measures global quality. SAR (Source-to-Artefact Ratio) measures the presence of additional artificial noise|(See <a href="https://github.com/sigsep/sigsep-mus-eval">the official museval page</a>)|
|2.| {CBAK, COVL, CSIG} | CSIG predicts the signal distortion mean opinion score (MOS), CBAK measures background intrusiveness, and COVL measures speech quality. CSIG, CBAK, and COVL range from 1 to 5| See paper: <a href="https://ecs.utdallas.edu/loizou/speech/obj_paper_jan08.pdf">Evaluation of Objective Quality Measures for Speech Enhancement</a>|
|3.| DNSMOS {BAK, OVRL, SIG, P808_MOS} |DNSMOS (Deep Noise Suppression Mean Opinion Score) measures the overall quality of the audio clip based on the ITU-T Rec. P.808 subjective evaluation. It outputs 4 scores: i) speech quality (SIG), ii) background noise quality (BAK), iii) the overall quality (OVRL), and iv) the P808_MOS of the audio. DNSMOS does not require clean references. | See paper: <a href="https://arxiv.org/pdf/2010.15258.pdf">Dnsmos: A non-intrusive perceptual objective speech quality metric to evaluate noise suppressors</a> and <a href="https://github.com/microsoft/DNS-Challenge/tree/master/DNSMOS">github page</a>|
|4.| FWSEGSNR | FWSEGSNR (Frequency-Weighted SEGmental SNR) is commonly used for evaluating dereverberation performance |See paper: <a href="https://ecs.utdallas.edu/loizou/speech/obj_paper_jan08.pdf">Evaluation of Objective Quality Measures for Speech Enhancement</a> |
|5.| LLR |LLR (Log Likelihood Ratio) measures how well an estimated speech signal matches the target (clean) signal in terms of their short-term spectral characteristics. |See paper: <a href="https://ecs.utdallas.edu/loizou/speech/obj_paper_jan08.pdf">Evaluation of Objective Quality Measures for Speech Enhancement</a> |
|6.| LSD | LSD (Log-Spectral Distance) measures the spectral differences between a clean reference signal and a processed speech signal.| See <a href="https://github.com/haoheliu/ssr_eval"> github page </a>|
|7.| MCD | MCD (Mel-Cepstral Distortion) measures the difference between the mel-cepstral coefficients (MCCs) of an estimated speech signal and the target (clean) speech signal. |See <a href="https://github.com/chenqi008/pymcd"> github page </a> |
|8.| NB_PESQ |NB-PESQ (NarrowBand Perceptual Evaluation of Speech Quality) measures speech quality in a way that reflects human auditory perception. It is defined in ITU-T Recommendation P.862 and was developed for assessing narrowband speech codecs and enhancement algorithms. | See <a href="https://github.com/ludlows/PESQ"> github page </a> |
|9.| PESQ | PESQ (Perceptual Evaluation of Speech Quality) assesses the quality of speech signals to mimic human perception. It is standardized by the International Telecommunication Union (ITU-T P.862) and is widely used in evaluating telecommunication systems and speech enhancement algorithms. |See <a href="https://github.com/ludlows/PESQ"> github page </a> |
|10.| SISDR |SI-SDR (Scale-Invariant Signal-to-Distortion Ratio) quantifies the ratio between the power of the target signal component and the residual distortion. It measures how well an estimated speech signal matches the target (clean) speech signal, while being invariant to differences in scale. |See paper: <a href="https://arxiv.org/abs/1811.02508">SDR - half-baked or well done?</a> |
|11.| SNR | SNR (Signal-to-Noise Ratio) is a fundamental metric used in speech quality measurement to evaluate the relative level of the desired speech signal compared to unwanted noise. It quantifies the clarity and intelligibility of speech in decibels (dB).| See paper: <a href="https://www.isca-archive.org/icslp_1998/hansen98_icslp.pdf">An effective quality evaluation protocol for speech enhancement algorithms</a>|
|12.| SRMR |SRMR (Speech-to-Reverberation Modulation Energy Ratio) evaluates the ratio of speech-dominant modulation energy to reverberation-dominant modulation energy. It quantifies the impact of reverberation on the quality and intelligibility of speech signals. SRMR does not require clean references. | See <a href="https://github.com/jfsantos/SRMRpy">SRMRpy</a> and <a href="https://github.com/MuSAELab/SRMRToolbox">SRMR Toolbox</a>|
|13.| SSNR |SSNR (Segmental Signal-to-Noise Ratio) is an extension of SNR (Signal-to-Noise Ratio) used for evaluating the quality of speech signals in shorter segments or frames. It is calculated by dividing the power of the clean speech signal by the power of the noise signal, computed over small segments of the speech signal. | See paper: <a href="https://www.isca-archive.org/icslp_1998/hansen98_icslp.pdf">An effective quality evaluation protocol for speech enhancement algorithms</a>|
|14.| STOI|STOI (Short-Time Objective Intelligibility Index) measures speech quality and intelligibility by operating on short-time segments of the speech signal, producing a score between 0 and 1. | See <a href="https://github.com/mpariente/pystoi">github page</a> |
## 2. Usage
### Step-by-Step Guide
If you haven't created a Conda environment for ClearerVoice-Studio yet, follow steps 1 and 2. Otherwise, skip directly to step 3.
1. **Clone the Repository**
``` sh
git clone https://github.com/modelscope/ClearerVoice-Studio.git
```
2. **Create Conda Environment**
``` sh
cd ClearerVoice-Studio
conda create -n ClearerVoice-Studio python=3.8
conda activate ClearerVoice-Studio
pip install -r requirements.txt
```
3. **Run the demo script**
``` sh
cd speechscore
python demo.py
```
or use the following script:
``` python
# Import pprint for pretty-printing the results in a more readable format
import pprint
# Import the SpeechScore class to evaluate speech quality metrics
from speechscore import SpeechScore
# Main block to ensure the code runs only when executed directly
if __name__ == '__main__':
# Initialize a SpeechScore object with a list of score metrics to be evaluated
    # Supports any subset of the list
mySpeechScore = SpeechScore([
'SRMR', 'PESQ', 'NB_PESQ', 'STOI', 'SISDR',
'FWSEGSNR', 'LSD', 'BSSEval', 'DNSMOS',
'SNR', 'SSNR', 'LLR', 'CSIG', 'CBAK',
'COVL', 'MCD'
])
# Call the SpeechScore object to evaluate the speech metrics between 'noisy' and 'clean' audio
# Arguments:
# - {test_path, reference_path} supports audio directories or audio paths (.wav or .flac)
# - window (float): seconds, set None to specify no windowing (process the full audio)
# - score_rate (int): specifies the sampling rate at which the metrics should be computed
# - return_mean (bool): set True to specify that the mean score for each metric should be returned
    print('score for a single wav file')
scores = mySpeechScore(test_path='audios/noisy.wav', reference_path='audios/clean.wav', window=None, score_rate=16000, return_mean=False)
# Pretty-print the resulting scores in a readable format
pprint.pprint(scores)
print('score for wav directories')
scores = mySpeechScore(test_path='audios/noisy/', reference_path='audios/clean/', window=None, score_rate=16000, return_mean=True)
# Pretty-print the resulting scores in a readable format
pprint.pprint(scores)
# Print only the resulting mean scores in a readable format
#pprint.pprint(scores['Mean_Score'])
```
The results should look like the following:
```sh
score for a single wav file
{'BSSEval': {'ISR': 22.74466768594831,
'SAR': -0.1921607960486258,
'SDR': -0.23921670199308115},
'CBAK': 1.5908301020179343,
'COVL': 1.5702204013203889,
'CSIG': 2.3259366746377066,
'DNSMOS': {'BAK': 1.3532928733331306,
'OVRL': 1.3714771994335782,
'P808_MOS': 2.354834,
'SIG': 1.8698058813241407},
'FWSEGSNR': 6.414399025759913,
'LLR': 0.85330075,
'LSD': 2.136734818644327,
'MCD': 11.013451521306235,
'NB_PESQ': 1.2447538375854492,
'PESQ': 1.0545592308044434,
'SISDR': -0.23707451176264824,
'SNR': -0.9504614142497447,
'SRMR': 6.202590182397157,
'SSNR': -0.6363067113236048,
'STOI': 0.8003376411051097}
score for wav directories
{'Mean_Score': {'BSSEval': {'ISR': 23.728811184378372,
'SAR': 4.839625092004951,
'SDR': 4.9270216975279135},
'CBAK': 1.9391528046230797,
'COVL': 1.5400270840455588,
'CSIG': 2.1286157747587344,
'DNSMOS': {'BAK': 1.9004402577440938,
'OVRL': 1.860621534493506,
'P808_MOS': 2.5821499824523926,
'SIG': 2.679913397827385},
'FWSEGSNR': 9.079539440199582,
'LLR': 1.1992616951465607,
'LSD': 2.0045290996104748,
'MCD': 8.916492705343465,
'NB_PESQ': 1.431145429611206,
'PESQ': 1.141619324684143,
'SISDR': 4.778657656271212,
'SNR': 4.571920494312266,
'SRMR': 9.221118316293268,
'SSNR': 2.9965604574762796,
'STOI': 0.8585249663711918},
'audio_1.wav': {'BSSEval': {'ISR': 22.74466768594831,
'SAR': -0.1921607960486258,
'SDR': -0.23921670199308115},
'CBAK': 1.5908301020179343,
'COVL': 1.5702204013203889,
'CSIG': 2.3259366746377066,
'DNSMOS': {'BAK': 1.3532928733331306,
'OVRL': 1.3714771994335782,
'P808_MOS': 2.354834,
'SIG': 1.8698058813241407},
'FWSEGSNR': 6.414399025759913,
'LLR': 0.85330075,
'LSD': 2.136734818644327,
'MCD': 11.013451521306235,
'NB_PESQ': 1.2447538375854492,
'PESQ': 1.0545592308044434,
'SISDR': -0.23707451176264824,
'SNR': -0.9504614142497447,
'SRMR': 6.202590182397157,
'SSNR': -0.6363067113236048,
'STOI': 0.8003376411051097},
'audio_2.wav': {'BSSEval': {'ISR': 24.712954682808437,
'SAR': 9.871410980058528,
'SDR': 10.093260097048908},
'CBAK': 2.287475507228225,
'COVL': 1.509833766770729,
'CSIG': 1.9312948748797627,
'DNSMOS': {'BAK': 2.4475876421550566,
'OVRL': 2.349765869553434,
'P808_MOS': 2.809466,
'SIG': 3.490020914330629},
'FWSEGSNR': 11.744679854639253,
'LLR': 1.5452226,
'LSD': 1.8723233805766222,
'MCD': 6.819533889380694,
'NB_PESQ': 1.617537021636963,
'PESQ': 1.2286794185638428,
'SISDR': 9.794389824305073,
'SNR': 10.094302402874277,
'SRMR': 12.23964645018938,
'SSNR': 6.629427626276164,
'STOI': 0.9167122916372739}}
```
Any subset of the full score list is supported; specify your own score list when constructing the SpeechScore object:
```
mySpeechScore = SpeechScore(['.'])
```
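For example, to compute only PESQ and STOI (any subset of the metric names from the table above can be passed in the same way):
``` python
# Score a single pair of files with just two metrics; the constructor and call
# signature are the same as in the full example above.
from speechscore import SpeechScore

mySpeechScore = SpeechScore(['PESQ', 'STOI'])
scores = mySpeechScore(test_path='audios/noisy.wav',
                       reference_path='audios/clean.wav',
                       window=None, score_rate=16000, return_mean=False)
print(scores)
```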
## 3. Acknowledgements
We referred to <a href="https://github.com/aliutkus/speechmetrics">speechmetrics</a>, <a href="https://github.com/microsoft/DNS-Challenge/tree/master/DNSMOS">DNSMOS</a>, <a href="https://github.com/sigsep/bsseval/tree/master">BSSEval</a>, <a href="https://github.com/chenqi008/pymcd/blob/main/pymcd/mcd.py">pymcd</a>, <a href="https://github.com/mpariente/pystoi">pystoi</a>, <a href="https://github.com/ludlows/PESQ">PESQ</a>, and <a href="https://github.com/santi-pdp/segan_pytorch/tree/master">segan_pytorch</a> when implementing this repository. | {
"source": "modelscope/ClearerVoice-Studio",
"title": "speechscore/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/speechscore/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 12478
} |
# ClearerVoice-Studio: Train Speech Enhancement Models
## 1. Introduction
This repository provides training scripts for speech enhancement models. Currently, it supports fresh training or finetuning of the following models:
|model name| sampling rate | Paper Link|
|----------|---------------|------------|
|FRCRN_SE_16K|16000 | FRCRN ([Paper](https://arxiv.org/abs/2206.07293), ICASSP 2022) |
|MossFormerGAN_SE_16K|16000| MossFormer2 Backbone + GAN ([Paper](https://arxiv.org/abs/2312.11825), ICASSP 2024)|
|MossFormer2_SE_48K |48000| MossFormer2 Backbone + Masking ([Paper](https://arxiv.org/abs/2312.11825), ICASSP 2024)|
1. **FRCRN_SE_16K**
FRCRN uses a complex network for single-channel speech enhancement. It is a generalized method for enhancing speech in various noise environments. Our trained FRCRN model achieved strong performance in the IEEE ICASSP 2022 DNS Challenge. Please check our [paper](https://arxiv.org/abs/2206.07293).
The FRCRN model is developed based on a new framework of **Convolutional Recurrent Encoder-Decoder (CRED)**, which is built on the Convolutional Encoder-Decoder (CED) architecture. CRED significantly improves the performance of the convolution kernel by widening the limited receptive fields of the CED convolutions with frequency recurrent layers. In addition, we introduce the Complex Feedforward Sequential Memory Network (CFSMN) to reduce the complexity of the recurrent network, and apply complex-valued network operations to realize a fully complex deep model, which not only models long speech sequences more effectively but also enhances the amplitude and phase of speech simultaneously.

2. **MossFormerGAN_SE_16K**
MossFormerGAN is motivated by [CMGAN](https://arxiv.org/abs/2203.15149) and [TF-GridNet](https://arxiv.org/abs/2209.03952). We use an extended MossFormer2 backbone (see the figure below) to replace the Conformer in CMGAN and add the full-band self-attention module proposed in TF-GridNet. The whole speech enhancement network is optimized by the adversarial training scheme described in CMGAN. We extended the CNN network to an attention-based network for the discriminator. MossFormerGAN is trained for 16 kHz speech enhancement.

3. **MossFormer2_SE_48K**
`MossFormer2_SE_48K` is a full-band (48kHz) speech enhancement model. Full-band 48 kHz speech enhancement is becoming increasingly important due to advancements in communication platforms and high-quality media consumption. Several open-source GitHub repos such as [FullSubNet](https://github.com/Audio-WestlakeU/FullSubNet), [DeepFilterNet](https://github.com/Rikorose/DeepFilterNet), and [resemble-enhance](https://github.com/resemble-ai/resemble-enhance) have released pre-trained models. We provide a more competitive `MossFormer2_SE_48K` model in our [ClearVoice](https://github.com/modelscope/ClearerVoice-Studio/tree/main/clearvoice) toolkit, together with the training and finetuning scripts here.
`MossFormer2_SE_48K` uses the following model architecture. It takes noisy fbank features as input to predict the [Phase-Sensitive Mask (PSM)](https://www.jonathanleroux.org/pdf/Erdogan2015ICASSP04.pdf). The predicted mask is then applied to the noisy STFT spectrogram, and the estimated STFT spectrogram is finally converted back to a waveform by IFFT. The main component is the MossFormer2 block, which consists of a MossFormer module and a recurrent module. The number of MossFormer2 blocks can be adjusted to deepen the network. We used 24 MossFormer2 blocks in `MossFormer2_SE_48K`.

We provide performance comparisons of our released models with the publicly available models on the [ClearVoice](https://github.com/modelscope/ClearerVoice-Studio/tree/main/clearvoice) page.
## 2. Usage
### Step-by-Step Guide
If you haven't created a Conda environment for ClearerVoice-Studio yet, follow steps 1 and 2. Otherwise, skip directly to step 3.
1. **Clone the Repository**
``` sh
git clone https://github.com/modelscope/ClearerVoice-Studio.git
```
2. **Create Conda Environment**
``` sh
cd ClearerVoice-Studio
conda create -n ClearerVoice-Studio python=3.8
conda activate ClearerVoice-Studio
pip install -r requirements.txt
```
Note: Higher Python versions such as Python 3.12.1 should also be supported.
3. **Prepare Dataset**
If you don't have any training dataset to start with, we recommend downloading the VoiceBank-DEMAND dataset ([link](https://datashare.ed.ac.uk/handle/10283/2826)). You may store the dataset anywhere. To start model training, you need to create two scp files as shown in `data/tr_demand_28_spks_16k.scp` and `data/cv_demand_testset_16k.scp`: `data/tr_demand_28_spks_16k.scp` contains the training data list and `data/cv_demand_testset_16k.scp` contains the testing data list.
Replace `data/tr_demand_28_spks_16k.scp` and `data/cv_demand_testset_16k.scp` with your new .scp files in `config/train/*.yaml`. You are then ready to train the models.
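As a rough illustration, a training list could be generated as follows; this sketch assumes one wav path per line, which should be verified against the shipped `data/*.scp` examples, and the dataset directory below is hypothetical:
``` python
# Build a training list with one wav path per line (an assumed layout; check
# data/tr_demand_28_spks_16k.scp for the exact format before use).
from pathlib import Path

noisy_dir = Path("/path/to/VoiceBank-DEMAND/noisy_trainset_28spk_wav")  # hypothetical location
with open("data/my_tr_demand_16k.scp", "w") as f:
    for wav in sorted(noisy_dir.glob("*.wav")):
        f.write(f"{wav}\n")
```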
4. **Start Training**
``` sh
bash train.sh
```
You may need to set the correct network in `train.sh` and choose either a fresh training or a finetune process using:
```
network=MossFormer2_SE_48K #Train MossFormer2_SE_48K model
train_from_last_checkpoint=1 #Set 1 to start training from the last checkpoint if exists,
init_checkpoint_path=./ #Path to your initial model if starting fine-tuning; otherwise, set it to 'None'
``` | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/speech_enhancement/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/speech_enhancement/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 5719
} |
# ClearerVoice-Studio: Train Speech Separation Models
## 1. Introduction
This repository provides flexible training and finetuning scripts for speech separation models. Currently, it supports both 8 kHz and 16 kHz sampling rates:
|model name| sampling rate | Paper Link|
|----------|---------------|------------|
|MossFormer2_SS_8K |8000| MossFormer2 ([Paper](https://arxiv.org/abs/2312.11825), ICASSP 2024)|
|MossFormer2_SS_16K |16000| MossFormer2 ([Paper](https://arxiv.org/abs/2312.11825), ICASSP 2024)|
MossFormer2 achieved state-of-the-art speech separation performance as reported in the paper published at ICASSP 2024. It is a hybrid model that integrates a recurrent module into
our previous [MossFormer](https://arxiv.org/abs/2302.11824) framework. MossFormer2 is capable of modeling not only long-range and coarse-scale dependencies but also fine-scale recurrent patterns. For efficient self-attention across the extensive sequence, MossFormer2 adopts the joint local-global self-attention strategy proposed for MossFormer. MossFormer2 introduces a dedicated recurrent module to model intricate temporal dependencies within speech signals.

Instead of applying recurrent neural networks (RNNs) with traditional recurrent connections, we present a recurrent module based on a feedforward sequential memory network (FSMN), which is considered an "RNN-free" recurrent network due to its ability to capture recurrent patterns without using recurrent connections. Our recurrent module mainly comprises an enhanced dilated FSMN block that uses gated convolutional units (GCU) and dense connections. In addition, a bottleneck layer and an output layer are added to control the information flow. The recurrent module relies on linear projections and convolutions for seamless, parallel processing of the entire sequence.

MossFormer2 demonstrates remarkable performance on the WSJ0-2/3mix, Libri2Mix, and WHAM!/WHAMR! benchmarks. Please refer to our [Paper](https://arxiv.org/abs/2312.11825) or try the individual models using the standalone script ([link](https://github.com/alibabasglab/MossFormer2/tree/main/MossFormer2_standalone)).
We will provide performance comparisons of our released models with the publicly available models on the [ClearVoice](https://github.com/modelscope/ClearerVoice-Studio/tree/main/clearvoice) page.
## 2. Usage
### Step-by-Step Guide
If you haven't created a Conda environment for ClearerVoice-Studio yet, follow steps 1 and 2. Otherwise, skip directly to step 3.
1. **Clone the Repository**
``` sh
git clone https://github.com/modelscope/ClearerVoice-Studio.git
```
2. **Create Conda Environment**
``` sh
cd ClearerVoice-Studio
conda create -n ClearerVoice-Studio python=3.8
conda activate ClearerVoice-Studio
pip install -r requirements.txt
```
3. **Prepare Dataset**
a. Use a pre-prepared toy [MiniLibriMix dataset](https://zenodo.org/records/3871592). It contains a train set of 800 mixtures and a validation set of 200 mixtures.
b. Create your own dataset
- WSJ0-2Mix dataset preparation: We assume you have purchased [WSJ0 speech dataset](https://catalog.ldc.upenn.edu/LDC93S6A)
- Step 1: Download [WHAM! noise dataset](https://my-bucket-a8b4b49c25c811ee9a7e8bba05fa24c7.s3.amazonaws.com/wham_noise.zip). Go to [this page](http://wham.whisper.ai/) for more information.
- Step 2: Use the mixture generation scripts in [python format](https://github.com/mpariente/pywsj0-mix) or [matlab format](https://www.merl.com/research/highlights/deep-clustering/) to generate mixture datasets. Use a sampling rate of either 8000 Hz or 16000 Hz.
- Step 3: Create scp files as formatted in `data/tr_wsj0_2mix_16k.scp` for train, validation, and test.
- Step 4: Replace the `tr_list` and `cv_list` paths for scp files in `config/train/MossFormer2_SS_16K.yaml`
- LibriMix dataset preparation: If you don't have the WSJ0 dataset, we suggest downloading the [LibriSpeech dataset](https://www.openslr.org/12) (only 'train-clean-360.tar.gz' is required) and using the following steps to create the LibriMix dataset.
- Step 1. Download [WHAM! noise dataset](https://my-bucket-a8b4b49c25c811ee9a7e8bba05fa24c7.s3.amazonaws.com/wham_noise.zip). Go to [this page](http://wham.whisper.ai/) for more information.
- Step 2. Clone the [repo](https://github.com/JorisCos/LibriMix) and run the main script : [generate_librimix.sh](https://github.com/JorisCos/LibriMix/blob/master/generate_librimix.sh)
```sh
git clone https://github.com/JorisCos/LibriMix
cd LibriMix
./generate_librimix.sh storage_dir
```
- Step 3: Create scp files as formatted in `data/tr_wsj0_2mix_16k.scp` for train, validation, and test.
- Step 4: Replace the `tr_list` and `cv_list` paths for scp files in `config/train/MossFormer2_SS_16K.yaml`
4. **Start Training**
``` sh
bash train.sh
```
You may need to set the correct network in `train.sh` and choose either a fresh training or a finetune process using:
```
network=MossFormer2_SS_16K #Train MossFormer2_SS_16K model
train_from_last_checkpoint=1 #Set 1 to start training from the last checkpoint if exists,
init_checkpoint_path=./ #Path to your initial model if starting fine-tuning; otherwise, set it to 'None'
``` | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/speech_separation/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/speech_separation/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 5470
} |
# ClearerVoice-Studio: Train Speech Super-Resolution Models
## 1. Introduction
This repository provides flexible training and finetuning scripts for optimizing speech super-resolution models. One or multiple models can be trained to upscale multiple lower sampling rates (>= 8 kHz) to a 48 kHz sampling rate. | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/speech_super_resolution/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/speech_super_resolution/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 303
} |
# ClearerVoice-Studio: Target Speaker Extraction Algorithms
## Table of Contents
- [1. Introduction](#1-introduction)
- [2. Usage](#2-usage)
- [3. Task: Audio-only Speaker Extraction Conditioned on a Reference Speech](#3-audio-only-speaker-extraction-conditioned-on-a-reference-speech)
- [4. Task: Audio-visual Speaker Extraction Conditioned on Face (Lip) Recording](#4-audio-visual-speaker-extraction-conditioned-on-face-or-lip-recording)
- [5. Task: Audio-visual Speaker Extraction Conditioned on Body Gestures](#5-audio-visual-speaker-extraction-conditioned-on-body-gestures)
- [6. Task: Neuro-steered Speaker Extraction Conditioned on EEG Signals](#6-neuro-steered-speaker-extraction-conditioned-on-eeg-signals)
## 1. Introduction
This repository provides training scripts for various target speaker extraction algorithms, including audio-only, audio-visual, and neuro-steered speaker extraction.
## 2. Usage
### Step-by-Step Guide
1. **Clone the Repository**
``` sh
git clone https://github.com/modelscope/ClearerVoice-Studio.git
```
2. **Create Conda Environment**
``` sh
cd ClearerVoice-Studio/train/target_speaker_extraction/
conda create -n clear_voice_tse python=3.9
conda activate clear_voice_tse
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=11.8 -c pytorch -c nvidia
pip install -r requirements.txt
```
3. **Download Dataset**
> Follow the download links or preprocessing scripts provided under each task section.
4. **Modify Dataset Paths**
> Update the paths to your datasets in the configuration files. For example, modify the "audio_direc" and "ref_direc" in "config/config_YGD_gesture_seg_2spk.yaml"
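For illustration, the two paths could also be set programmatically. The sketch below assumes `audio_direc` and `ref_direc` sit at the top level of the YAML and uses hypothetical local paths; adjust both to match the shipped config.
``` python
# Update the dataset paths in the config (assumed top-level keys, hypothetical paths).
import yaml

cfg_file = "config/config_YGD_gesture_seg_2spk.yaml"
with open(cfg_file) as f:
    cfg = yaml.safe_load(f)
cfg["audio_direc"] = "/data/YGD/audio/"     # hypothetical path to the audio
cfg["ref_direc"] = "/data/YGD/gesture/"     # hypothetical path to the reference cue
with open(cfg_file, "w") as f:
    yaml.dump(cfg, f)
```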
5. **Modify Train Configuration**
> Adjust the settings in the "train.sh" file. For example, set "n_gpu=1" for single-GPU training, or "n_gpu=2" for two-GPU distributed training
6. **Start Training**
``` sh
bash train.sh
```
7. **Visualize Training Progress using Tensorboard**
``` sh
tensorboard --logdir ./checkpoints/
```
8. **Optionally Evaluate Checkpoints**
``` sh
bash evaluate_only.sh
```
## 3. Audio-only speaker extraction conditioned on a reference speech
### Support datasets for training:
* WSJ0-2mix [[Download](https://github.com/gemengtju/Tutorial_Separation/blob/master/generation/wsj0-2mix/create-speaker-mixtures.zip)]
### Support models for training:
* SpEx+ (Non-causal) [[Paper: SpEx+: A Complete Time Domain Speaker Extraction Network](https://arxiv.org/abs/2005.04686)]
### Non-causal (Offline) WSJ0-2mix benchmark:
| Dataset | Speakers | Model| Config | Checkpoint | SI-SDRi (dB) | SDRi (dB) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| WSJ0-2mix | 2-mix | SpEx+ | [Paper](https://arxiv.org/abs/2005.04686) | - | 16.9 | 17.2 |
| WSJ0-2mix | 2-mix | SpEx+ | [This repo](./config/config_wsj0-2mix_speech_SpEx-plus_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_wsj0-2mix_speech_SpEx-plus_2spk/) | 17.1 | 17.5 |
## 4. Audio-visual speaker extraction conditioned on face or lip recording
### Support datasets for training:
* VoxCeleb2 [[Download](https://huggingface.co/datasets/alibabasglab/VoxCeleb2-mix)]
* LRS2 [[Download](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html)]
### Support models for training:
* AV-ConvTasNet (Causal/Non-causal) [[Paper: Time Domain Audio Visual Speech Separation](https://arxiv.org/abs/1904.03760)]
* AV-DPRNN (aka USEV) (Non-causal) [[Paper: Universal Speaker Extraction With Visual Cue](https://ieeexplore.ieee.org/document/9887809)]
* AV-TFGridNet (Non-causal) [[Paper: Scenario-Aware Audio-Visual TF-GridNet for Target Speech Extraction](https://arxiv.org/abs/2310.19644)]
* AV-Mossformer2 (Non-causal) [Paper: ClearVoice]
### Non-causal (Offline) VoxCeleb2-mix benchmark:
Dataset | Speakers | Model| Config | Checkpoint | SI-SDRi (dB) | SDRi (dB)
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| VoxCeleb2 | 2-mix | [AV-ConvTasNet](https://arxiv.org/abs/1904.03760) | [Paper](https://arxiv.org/abs/1904.03760) | - | 10.6 | 10.9
| VoxCeleb2 | 2-mix | [MuSE](https://arxiv.org/abs/2010.07775) | [Paper](https://arxiv.org/abs/2010.07775) | - | 11.7 | 12.0
| VoxCeleb2 | 2-mix | [reentry](https://ieeexplore.ieee.org/document/9721129) | [Paper](https://ieeexplore.ieee.org/document/9721129) | - | 12.6 | 12.9
| VoxCeleb2 | 2-mix | [AV-DPRNN](https://ieeexplore.ieee.org/document/9887809) | [This repo](./config/config_VoxCeleb2_lip_dprnn_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_VoxCeleb2_lip_dprnn_2spk/)| 11.5 | 11.8
| VoxCeleb2 | 2-mix | [AV-TFGridNet](https://arxiv.org/abs/2310.19644) | [This repo](./config/config_VoxCeleb2_lip_tfgridnet_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_VoxCeleb2_lip_tfgridnet_2spk/)| 13.7 | 14.1
| VoxCeleb2 | 2-mix | AV-Mossformer2| [This repo](./config/config_VoxCeleb2_lip_mossformer2_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_VoxCeleb2_lip_mossformer2_2spk/)| 14.6 | 14.9
| VoxCeleb2 | 3-mix | [AV-ConvTasNet](https://arxiv.org/abs/1904.03760) | [Paper](https://arxiv.org/abs/1904.03760) | - | 9.8 | 10.2
| VoxCeleb2 | 3-mix | [MuSE](https://arxiv.org/abs/2010.07775) | [Paper](https://arxiv.org/abs/2010.07775) | - | 11.6 | 12.2
| VoxCeleb2 | 3-mix | [reentry](https://ieeexplore.ieee.org/document/9721129) | [Paper](https://ieeexplore.ieee.org/document/9721129) | - | 12.6 | 13.1
| VoxCeleb2 | 3-mix | [AV-DPRNN](https://ieeexplore.ieee.org/document/9887809) | [This repo](./config/config_VoxCeleb2_lip_dprnn_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_VoxCeleb2_lip_dprnn_3spk/)| 10.5 | 11.0
| VoxCeleb2 | 3-mix | [AV-TFGridNet](https://arxiv.org/abs/2310.19644) | [This repo](./config/config_VoxCeleb2_lip_tfgridnet_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_VoxCeleb2_lip_tfgridnet_3spk/)| 14.2 | 14.6
| VoxCeleb2 | 3-mix | AV-Mossformer2| [This repo](./config/config_VoxCeleb2_lip_mossformer2_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_VoxCeleb2_lip_mossformer2_3spk/)| 15.5 | 16.0
### Non-causal (Offline) LRS2-mix benchmark:
Dataset | Speakers | Model| Config | Checkpoint | SI-SDRi (dB) | SDRi (dB)
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| LRS2 | 2-mix | [AV-ConvTasNet](https://arxiv.org/abs/1904.03760) | [This repo](./config/config_LRS2_lip_convtasnet_2spk.yaml)| [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_convtasnet_2spk/) | 11.6 | 11.9
| LRS2 | 2-mix | [AV-DPRNN](https://ieeexplore.ieee.org/document/9887809) | [This repo](./config/config_LRS2_lip_dprnn_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_dprnn_2spk/) | 12.0 | 12.4
| LRS2 | 2-mix | [AV-TFGridNet](https://arxiv.org/abs/2310.19644) | [This repo](./config/config_LRS2_lip_tfgridnet_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_tfgridnet_2spk/)| 15.1 | 15.4
| LRS2 | 2-mix | AV-Mossformer2| [This repo](./config/config_LRS2_lip_mossformer2_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_mossformer2_2spk/)| 15.5 | 15.8
| LRS2 | 3-mix | [AV-ConvTasNet](https://arxiv.org/abs/1904.03760) | [This repo](./config/config_LRS2_lip_convtasnet_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_convtasnet_3spk/)| 10.8 | 11.3
| LRS2 | 3-mix | [AV-DPRNN](https://ieeexplore.ieee.org/document/9887809) | [This repo](./config/config_LRS2_lip_dprnn_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_dprnn_3spk/)| 10.6 | 11.1
| LRS2 | 3-mix | [AV-TFGridNet](https://arxiv.org/abs/2310.19644) | [This repo](./config/config_LRS2_lip_tfgridnet_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_tfgridnet_3spk/)| 15.0 | 15.4
| LRS2 | 3-mix | AV-Mossformer2 | [This repo](./config/config_LRS2_lip_mossformer2_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_LRS2_lip_mossformer2_3spk/)| 16.2 | 16.6
## 5. Audio-visual speaker extraction conditioned on body gestures
### Support datasets for training:
* YGD [[Download](https://huggingface.co/datasets/alibabasglab/YGD-mix)] [[Paper: Robots Learn Social Skills: End-to-End Learning of Co-Speech Gesture Generation for Humanoid Robots](https://arxiv.org/abs/1810.12541)]
### Support models for training:
* SEG (Non-causal) [[Paper: Speaker Extraction with Co-Speech Gestures Cue](https://ieeexplore.ieee.org/document/9774925)]
### Non-causal (Offline) YGD-mix benchmark:
Dataset | Speakers | Model| Config | Checkpoint | SI-SDRi (dB) | SDRi (dB)
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| YGD | 2-mix | [DPRNN-GSR](https://ieeexplore.ieee.org/document/9774925) | [Paper](https://ieeexplore.ieee.org/document/9774925) | - | 6.2 | 8.1
| YGD | 2-mix | [SEG](https://ieeexplore.ieee.org/document/9774925) | [Paper](https://ieeexplore.ieee.org/document/9774925) | - | 9.1 | 10.0
| YGD | 2-mix | [SEG](https://ieeexplore.ieee.org/document/9774925) | [This repo](./config/config_YGD_gesture_seg_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_YGD_gesture_seg_2spk/)| 9.5 | 10.4
| YGD | 3-mix | [DPRNN-GSR](https://ieeexplore.ieee.org/document/9774925) | [Paper](https://ieeexplore.ieee.org/document/9774925) | - | 1.8 | 3.5
| YGD | 3-mix | [SEG](https://ieeexplore.ieee.org/document/9774925) | [Paper](https://ieeexplore.ieee.org/document/9774925) | - | 5.0 | 5.3
| YGD | 3-mix | [SEG](https://ieeexplore.ieee.org/document/9774925) | [This repo](./config/config_YGD_gesture_seg_3spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_YGD_gesture_seg_3spk/)| 4.9 | 5.6
## 6. Neuro-steered speaker extraction conditioned on EEG signals
### Support datasets for training:
* KUL [[Download](https://huggingface.co/datasets/alibabasglab/KUL-mix)] [[Paper: Auditory-Inspired Speech Envelope Extraction Methods for Improved EEG-Based Auditory Attention Detection in a Cocktail Party Scenario](https://ieeexplore.ieee.org/document/7478117?signout=success)]
### Support models for training:
* NeuroHeed (Non-causal) [[Paper: Neuro-Steered Speaker Extraction Using EEG Signals](https://ieeexplore.ieee.org/document/10683957)]
### Non-causal (Offline) KUL-mix benchmark:
Dataset | Speakers | Model | Config | Checkpoint | SI-SDRi (dB) | SDRi (dB)
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| KUL | 2-mix | [NeuroHeed](https://ieeexplore.ieee.org/document/10683957) | [Paper](https://ieeexplore.ieee.org/document/10683957) | - | 14.3 | 15.5
| KUL | 2-mix | [NeuroHeed](https://ieeexplore.ieee.org/document/10683957) | [This repo](./config/config_KUL_eeg_neuroheed_2spk.yaml) | [This repo](https://huggingface.co/alibabasglab/log_KUL_eeg_neuroheed_2spk/)| 13.4 | 15.0
### Causal (online) KUL-mix benchmark:
Dataset | Speakers | Model | Config | Checkpoint | SI-SDRi (dB) | SDRi (dB)
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| KUL | 2-mix | [NeuroHeed](https://ieeexplore.ieee.org/document/10683957) | [Paper](https://ieeexplore.ieee.org/document/10683957) | - | 11.2 | 11.8 | {
"source": "modelscope/ClearerVoice-Studio",
"title": "train/target_speaker_extraction/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/train/target_speaker_extraction/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 11001
} |
The SRMRpy toolbox is licensed under the MIT license.
> Copyright (c) 2014 João F. Santos, Tiago H. Falk
>
> Permission is hereby granted, free of charge, to any person obtaining a copy
> of this software and associated documentation files (the "Software"), to deal
> in the Software without restriction, including without limitation the rights
> to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> copies of the Software, and to permit persons to whom the Software is
> furnished to do so, subject to the following conditions:
>
> The above copyright notice and this permission notice shall be included in all
> copies or substantial portions of the Software.
>
> THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
> SOFTWARE. | {
"source": "modelscope/ClearerVoice-Studio",
"title": "speechscore/scores/srmr/LICENSE.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/speechscore/scores/srmr/LICENSE.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 1166
} |
# Face detector
This face detector is adapted from `https://github.com/cs-giung/face-detection-pytorch`. | {
"source": "modelscope/ClearerVoice-Studio",
"title": "clearvoice/models/av_mossformer2_tse/faceDetector/README.md",
"url": "https://github.com/modelscope/ClearerVoice-Studio/blob/main/clearvoice/models/av_mossformer2_tse/faceDetector/README.md",
"date": "2024-11-12T07:26:34",
"stars": 2289,
"description": "An AI-Powered Speech Processing Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Enhancement, Separation, and Target Speaker Extraction, etc.",
"file_size": 105
} |
# OpenDeepResearcher
This notebook implements an **AI researcher** that continuously searches for information based on a user query until the system is confident that it has gathered all the necessary details. It makes use of several services to do so:
- **SERPAPI**: To perform Google searches.
- **Jina**: To fetch and extract webpage content.
- **OpenRouter** (default model: `anthropic/claude-3.5-haiku`): To interact with an LLM for generating search queries, evaluating page relevance, and extracting context.
## Features
- **Iterative Research Loop:** The system refines its search queries iteratively until no further queries are required.
- **Asynchronous Processing:** Searches, webpage fetching, evaluation, and context extraction are performed concurrently to improve speed.
- **Duplicate Filtering:** Aggregates and deduplicates links within each round, ensuring that the same link isn’t processed twice.
- **LLM-Powered Decision Making:** Uses the LLM to generate new search queries, decide on page usefulness, extract relevant context, and produce a final comprehensive report.
- **Gradio Interface:** Use the `open-deep-researcher - gradio` notebook if you want to use this in a functional UI
## Requirements
- API access and keys for:
- **OpenRouter API**
- **SERPAPI API**
- **Jina API**
## Setup
1. **Clone or Open the Notebook:**
- Download the notebook file or open it directly in [Google Colab](https://colab.research.google.com/github/mshumer/OpenDeepResearcher/blob/main/open_deep_researcher.ipynb).
2. **Install `nest_asyncio`:**
Run the first cell to set up `nest_asyncio`.
3. **Configure API Keys:**
- Replace the placeholder values in the notebook for `OPENROUTER_API_KEY`, `SERPAPI_API_KEY`, and `JINA_API_KEY` with your actual API keys.
## Usage
1. **Run the Notebook Cells:**
Execute all cells in order. The notebook will prompt you for:
- A research query/topic.
- An optional maximum number of iterations (default is 10).
2. **Follow the Research Process:**
- **Initial Query & Search Generation:** The notebook uses the LLM to generate initial search queries.
- **Asynchronous Searches & Extraction:** It performs SERPAPI searches for all queries concurrently, aggregates unique links, and processes each link in parallel to determine page usefulness and extract relevant context.
- **Iterative Refinement:** After each round, the aggregated context is analyzed by the LLM to determine if further search queries are needed.
- **Final Report:** Once the LLM indicates that no further research is needed (or the iteration limit is reached), a final report is generated based on all gathered context.
3. **View the Final Report:**
The final comprehensive report will be printed in the output.
## How It Works
1. **Input & Query Generation:**
The user enters a research topic, and the LLM generates up to four distinct search queries.
2. **Concurrent Search & Processing:**
- **SERPAPI:** Each search query is sent to SERPAPI concurrently.
- **Deduplication:** All retrieved links are aggregated and deduplicated within the current iteration.
- **Jina & LLM:** Each unique link is processed concurrently to fetch webpage content via Jina, evaluate its usefulness with the LLM, and extract relevant information if the page is deemed useful.
3. **Iterative Refinement:**
The system passes the aggregated context to the LLM to determine if further search queries are needed. New queries are generated if required; otherwise, the loop terminates.
4. **Final Report Generation:**
All gathered context is compiled and sent to the LLM to produce a final, comprehensive report addressing the original query.
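The control flow can be sketched as follows; this is a stripped-down illustration in which the SERPAPI, Jina, and LLM calls are replaced by stubs, so only the loop structure is shown (the notebook performs these steps with real API requests):
```python
# Stubbed sketch of the iterative research loop: concurrent searches,
# per-round deduplication, concurrent extraction, and LLM-driven refinement.
import asyncio

async def search(query):                         # stub for a SERPAPI search
    return [f"https://example.com/{abs(hash(query)) % 100}"]

async def fetch_and_extract(link, topic):        # stub for Jina fetch + LLM evaluation/extraction
    return f"context from {link} about {topic}"

async def next_queries(topic, context):          # stub for the LLM deciding whether to continue
    return [] if context else [f"{topic} overview", f"{topic} recent results"]

async def research(topic, max_iterations=10):
    context = []
    queries = await next_queries(topic, context)
    for _ in range(max_iterations):
        if not queries:
            break
        results = await asyncio.gather(*(search(q) for q in queries))
        links = {link for group in results for link in group}            # deduplicate per round
        context += await asyncio.gather(*(fetch_and_extract(l, topic) for l in links))
        queries = await next_queries(topic, context)                     # refine or stop
    return "\n".join(context)                                            # compiled for the final report

print(asyncio.run(research("speech enhancement benchmarks")))
```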
## Troubleshooting
- **RuntimeError with asyncio:**
If you encounter an error like:
```
RuntimeError: asyncio.run() cannot be called from a running event loop
```
Ensure you have applied `nest_asyncio` as shown in the setup section.
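  A minimal application of the standard fix (what the setup cell is expected to do):
  ```python
  # Patch the notebook's already-running event loop so asyncio.run() can be used.
  import nest_asyncio
  nest_asyncio.apply()
  ```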
- **API Issues:**
Verify that your API keys are correct and that you are not exceeding any rate limits.
---
Follow me on [X](https://x.com/mattshumer_) for updates on this and other AI things I'm working on.
OpenDeepResearcher is released under the MIT License. See the LICENSE file for more details. | {
"source": "mshumer/OpenDeepResearcher",
"title": "README.md",
"url": "https://github.com/mshumer/OpenDeepResearcher/blob/main/README.md",
"date": "2025-02-03T23:08:25",
"stars": 2275,
"description": null,
"file_size": 4269
} |
<div align="center">
<picture>
<source srcset="figures/MiniMaxLogo-Dark.png" media="(prefers-color-scheme: dark)">
<img src="figures/MiniMaxLogo-Light.png" width="60%" alt="MiniMax-Text-01">
</source>
</picture>
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.minimaxi.com/en" target="_blank" style="margin: 2px; color: var(--fgColor-default);">
<img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://arxiv.org/abs/2501.08313" target="_blank" style="margin: 2px;">
<img alt="Paper" src="https://img.shields.io/badge/📖_Paper-MiniMax--01-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.minimax.io/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/_MiniMax_Chat-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://intl.minimaxi.com" style="margin: 2px;">
<img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/MiniMaxAI" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/🤗_Hugging_Face-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/figures/wechat-qrcode.jpeg" target="_blank" style="margin: 2px;">
<img alt="WeChat" src="https://img.shields.io/badge/_WeChat-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/_Model_License-Model_Agreement-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/_Code_License-MIT-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
# MiniMax-Text-01
## 1. Introduction
MiniMax-Text-01 is a powerful language model with 456 billion total parameters, of which 45.9 billion are activated per token. To better unlock the long context capabilities of the model, MiniMax-Text-01 adopts a hybrid architecture that combines Lightning Attention, Softmax Attention and Mixture-of-Experts (MoE). Leveraging advanced parallel strategies and innovative compute-communication overlap methods, such as Linear Attention Sequence Parallelism Plus (LASP+), varlen ring attention, Expert Tensor Parallel (ETP), etc., MiniMax-Text-01's training context length is extended to 1 million tokens, and it can handle a context of up to 4 million tokens during inference. On various academic benchmarks, MiniMax-Text-01 also demonstrates the performance of a top-tier model.
<p align="center">
<img width="100%" src="figures/TextBench.png">
</p>
## 2. Model Architecture
The architecture of MiniMax-Text-01 is briefly described as follows:
- Total Parameters: 456B
- Activated Parameters per Token: 45.9B
- Number of Layers: 80
- Hybrid Attention: a softmax attention layer is positioned after every 7 lightning attention layers (see the sketch after this list).
- Number of attention heads: 64
- Attention head dimension: 128
- Mixture of Experts:
- Number of experts: 32
- Expert hidden dimension: 9216
- Top-2 routing strategy
- Positional Encoding: Rotary Position Embedding (RoPE) applied to half of the attention head dimension with a base frequency of 10,000,000
- Hidden Size: 6144
- Vocab Size: 200,064
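As an illustration of the interleaving stated above (an inference from the layer ratio described in the list, not MiniMax code):
```python
# With 80 layers and one softmax-attention layer after every 7 lightning-attention
# layers, every 8th layer uses softmax attention.
num_layers = 80
layer_types = ["softmax" if (i + 1) % 8 == 0 else "lightning" for i in range(num_layers)]
print(layer_types.count("lightning"), "lightning +", layer_types.count("softmax"), "softmax layers")
# -> 70 lightning + 10 softmax layers
```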
## 3. Evaluation
### Core Academic Benchmarks
| **Tasks** | **GPT-4o (11-20)** | **Claude-3.5-Sonnet (10-22)** | **Gemini-1.5-Pro (002)** | **Gemini-2.0-Flash (exp)** | **Qwen2.5-72B-Inst.** | **DeepSeek-V3** | **Llama-3.1-405B-Inst.** | **MiniMax-Text-01** |
|-------------------------------|--------------------|-------------------------------|--------------------------|----------------------------|-----------------------|-----------------|--------------------------|---------------------|
| **General** | | | | | | | | |
| MMLU<sup>*</sup> | 85.7 | 88.3 | 86.8 | 86.5 | 86.1 | 88.5 | **88.6** | 88.5 |
| MMLU-Pro<sup>*</sup> | 74.4 | **78.0** | 75.8 | 76.4 | 71.1 | 75.9 | 73.3 | 75.7 |
| SimpleQA | **39.0** | 28.1 | 23.4 | 26.6 | 10.3 | 24.9 | 23.2 | 23.7 |
| C-SimpleQA | 64.6 | 56.8 | 59.4 | 63.3 | 52.2 | 64.8 | 54.7 | **67.4** |
| IFEval _(avg)_ | 84.1 | **90.1** | 89.4 | 88.4 | 87.2 | 87.3 | 86.4 | 89.1 |
| Arena-Hard | **92.4** | 87.6 | 85.3 | 72.7 | 81.2 | 91.4 | 63.5 | 89.1 |
| **Reasoning** | | | | | | | | |
| GPQA<sup>*</sup> _(diamond)_ | 46.0 | **65.0** | 59.1 | 62.1 | 49.0 | 59.1 | 50.7 | 54.4 |
| DROP<sup>*</sup> _(F1)_ | 89.2 | 88.8 | 89.2 | 89.3 | 85.0 | 91.0 | **92.5** | 87.8 |
| **Mathematics** | | | | | | | | |
| GSM8k<sup>*</sup> | 95.6 | **96.9** | 95.2 | 95.4 | 95.8 | 96.7 | 96.7 | 94.8 |
| MATH<sup>*</sup> | 76.6 | 74.1 | **84.6** | 83.9 | 81.8 | **84.6** | 73.8 | 77.4 |
| **Coding** | | | | | | | | |
| MBPP + | 76.2 | 75.1 | 75.4 | 75.9 | 77.0 | **78.8** | 73.0 | 71.7 |
| HumanEval | 90.2 | **93.7** | 86.6 | 89.6 | 86.6 | 92.1 | 89.0 | 86.9 |
<sup>*</sup> Evaluated following a _0-shot CoT_ setting.
### Long Benchmarks
#### 4M Needle In A Haystack Test
<p align="center">
<img width="90%" src="figures/niah.png">
</p>
#### Ruler
| Model | 4k | 8k | 16k | 32k | 64k | 128k | 256k | 512k | 1M |
|-------|----|----|-----|-----|-----|------|------|------|----|
| **GPT-4o (11-20)** | **0.970** | 0.921 | 0.890 | 0.888 | 0.884 | - | - | - | - |
| **Claude-3.5-Sonnet (10-22)** | 0.965 | 0.960 | 0.957 | 0.950 | **0.952** | 0.938 | - | - | - |
| **Gemini-1.5-Pro (002)** | 0.962 | 0.960 | **0.960** | **0.958** | 0.938 | 0.917 | 0.916 | 0.861 | 0.850 |
| **Gemini-2.0-Flash (exp)** | 0.960 | 0.960 | 0.951 | 0.957 | 0.937 | 0.860 | 0.797 | 0.709 | - |
| **MiniMax-Text-01** | 0.963 | **0.961** | 0.953 | 0.954 | 0.943 | **0.947** | **0.945** | **0.928** | **0.910** |
#### LongBench v2
| **Model** | **overall** | **easy** | **hard** | **short** | **medium** | **long** |
|----------------------------|-------------|----------|----------|------------|------------|----------|
| Human | 53.7 | 100.0 | 25.1 | 47.2 | 59.1 | 53.7 |
| **w/ CoT** | | | | | | |
| GPT-4o (11-20) | 51.4 | 54.2 | 49.7 | 59.6 | 48.6 | 43.5 |
| Claude-3.5-Sonnet (10-22) | 46.7 | 55.2 | 41.5 | 53.9 | 41.9 | 44.4 |
| Deepseek-V3 | - | - | - | - | - | - |
| Qwen2.5-72B-Inst. | 43.5 | 47.9 | 40.8 | 48.9 | 40.9 | 39.8 |
| **MiniMax-Text-01** | **56.5** | **66.1** | **50.5** | **61.7** | **56.7** | **47.2** |
| **w/o CoT** | | | | | | |
| GPT-4o (11-20) | 50.1 | 57.4 | 45.6 | 53.3 | 52.4 | 40.2 |
| Claude-3.5-Sonnet (10-22) | 41.0 | 46.9 | 37.3 | 46.1 | 38.6 | 37.0 |
| Deepseek-V3 | 48.7 | - | - | - | - | - |
| Qwen2.5-72B-Inst. | 42.1 | 42.7 | 41.8 | 45.6 | 38.1 | **44.4** |
| **MiniMax-Text-01** | **52.9** | **60.9** | **47.9** | **58.9** | **52.6** | 43.5 |
#### MTOB
| **Context Type** | **no context** | **half book** | **full book** | **Δ half book** | **Δ full book** |
|------------------|----------------|---------------|---------------|------------------|-----------------|
| **eng → kalam (ChrF)** | | | | | |
| GPT-4o (11-20) | 9.90 | **54.30** | - | 44.40 | - |
| Claude-3.5-Sonnet (10-22) | 20.22 | 53.62 | 55.65 | 33.39 | 35.42 |
| Gemini-1.5-Pro (002) | 16.79 | 53.68 | **57.90** | 36.89 | 41.11 |
| Gemini-2.0-Flash (exp) | 12.20 | 49.50 | 53.30 | 37.30 | 41.10 |
| Qwen-Long | 16.55 | 48.48 | 45.94 | 31.92 | 29.39 |
| **MiniMax-Text-01** | 6.0 | 51.74 | 51.60 | **45.7** | **45.6** |
| **kalam → eng (BLEURT)** | | | | | |
| GPT-4o (11-20) | 33.20 | 58.30 | - | 25.10 | - |
| Claude-3.5-Sonnet (10-22) | 31.42 | 59.70 | 62.30 | 28.28 | 30.88 |
| Gemini-1.5-Pro (002) | 32.02 | **61.52** | **63.09** | **29.50** | **31.07** |
| Gemini-2.0-Flash (exp) | 33.80 | 57.50 | 57.00 | 23.70 | 23.20 |
| Qwen-Long | 30.13 | 53.14 | 32.15 | 23.01 | 2.02 |
| **MiniMax-Text-01** | 33.65 | 57.10 | 58.00 | 23.45 | 24.35 |
## 4. Quickstart
Here we provide a simple example of loading the tokenizer and model to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig, QuantoConfig, GenerationConfig
# load hf config
hf_config = AutoConfig.from_pretrained("MiniMaxAI/MiniMax-Text-01", trust_remote_code=True)
# quantization config, int8 is recommended
quantization_config = QuantoConfig(
weights="int8",
modules_to_not_convert=[
"lm_head",
"embed_tokens",
] + [f"model.layers.{i}.coefficient" for i in range(hf_config.num_hidden_layers)]
+ [f"model.layers.{i}.block_sparse_moe.gate" for i in range(hf_config.num_hidden_layers)]
)
# assume 8 GPUs
world_size = 8
layers_per_device = hf_config.num_hidden_layers // world_size
# set device map
device_map = {
'model.embed_tokens': 'cuda:0',
'model.norm': f'cuda:{world_size - 1}',
'lm_head': f'cuda:{world_size - 1}'
}
for i in range(world_size):
for j in range(layers_per_device):
device_map[f'model.layers.{i * layers_per_device + j}'] = f'cuda:{i}'
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-Text-01")
prompt = "Hello!"
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant created by MiniMax based on MiniMax-Text-01 model."}]},
{"role": "user", "content": [{"type": "text", "text": prompt}]},
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# tokenize and move to device
model_inputs = tokenizer(text, return_tensors="pt").to("cuda")
# load bfloat16 model, move to device, and apply quantization
quantized_model = AutoModelForCausalLM.from_pretrained(
"MiniMaxAI/MiniMax-Text-01",
torch_dtype="bfloat16",
device_map=device_map,
quantization_config=quantization_config,
trust_remote_code=True,
offload_buffers=True,
)
# generate response
generation_config = GenerationConfig(
max_new_tokens=20,
eos_token_id=200020,
use_cache=True,
)
generated_ids = quantized_model.generate(**model_inputs, generation_config=generation_config)
print(f"generated_ids: {generated_ids}")
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## 5. Citation
```
@misc{minimax2025minimax01scalingfoundationmodels,
title={MiniMax-01: Scaling Foundation Models with Lightning Attention},
author={MiniMax and Aonian Li and Bangwei Gong and Bo Yang and Boji Shan and Chang Liu and Cheng Zhu and Chunhao Zhang and Congchao Guo and Da Chen and Dong Li and Enwei Jiao and Gengxin Li and Guojun Zhang and Haohai Sun and Houze Dong and Jiadai Zhu and Jiaqi Zhuang and Jiayuan Song and Jin Zhu and Jingtao Han and Jingyang Li and Junbin Xie and Junhao Xu and Junjie Yan and Kaishun Zhang and Kecheng Xiao and Kexi Kang and Le Han and Leyang Wang and Lianfei Yu and Liheng Feng and Lin Zheng and Linbo Chai and Long Xing and Meizhi Ju and Mingyuan Chi and Mozhi Zhang and Peikai Huang and Pengcheng Niu and Pengfei Li and Pengyu Zhao and Qi Yang and Qidi Xu and Qiexiang Wang and Qin Wang and Qiuhui Li and Ruitao Leng and Shengmin Shi and Shuqi Yu and Sichen Li and Songquan Zhu and Tao Huang and Tianrun Liang and Weigao Sun and Weixuan Sun and Weiyu Cheng and Wenkai Li and Xiangjun Song and Xiao Su and Xiaodong Han and Xinjie Zhang and Xinzhu Hou and Xu Min and Xun Zou and Xuyang Shen and Yan Gong and Yingjie Zhu and Yipeng Zhou and Yiran Zhong and Yongyi Hu and Yuanxiang Fan and Yue Yu and Yufeng Yang and Yuhao Li and Yunan Huang and Yunji Li and Yunpeng Huang and Yunzhi Xu and Yuxin Mao and Zehan Li and Zekang Li and Zewei Tao and Zewen Ying and Zhaoyang Cong and Zhen Qin and Zhenhua Fan and Zhihang Yu and Zhuo Jiang and Zijia Wu},
year={2025},
eprint={2501.08313},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.08313},
}
```
## 6. Chatbot & API
For general use and evaluation, we provide a [Chatbot](https://chat.minimax.io/) with online search capabilities and the [online API](https://intl.minimaxi.com) for developers.
Contact us at [[email protected]](mailto:[email protected]). | {
"source": "MiniMax-AI/MiniMax-01",
"title": "MiniMax-Text-01-Model-Card.md",
"url": "https://github.com/MiniMax-AI/MiniMax-01/blob/main/MiniMax-Text-01-Model-Card.md",
"date": "2025-01-14T15:43:28",
"stars": 2231,
"description": null,
"file_size": 18721
} |
<div align="center">
<picture>
<source srcset="figures/MiniMaxLogo-Dark.png" media="(prefers-color-scheme: dark)">
<img src="figures/MiniMaxLogo-Light.png" width="60%" alt="MiniMax-VL-01">
</source>
</picture>
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.minimaxi.com/en" target="_blank" style="margin: 2px; color: var(--fgColor-default);">
<img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://arxiv.org/abs/2501.08313" target="_blank" style="margin: 2px;">
<img alt="Paper" src="https://img.shields.io/badge/📖_Paper-MiniMax--01-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.minimax.io/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/_MiniMax_Chat-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://intl.minimaxi.com" style="margin: 2px;">
<img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/MiniMaxAI" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/🤗_Hugging_Face-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/figures/wechat-qrcode.jpeg" target="_blank" style="margin: 2px;">
<img alt="WeChat" src="https://img.shields.io/badge/_WeChat-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/_Model_License-Model_Agreement-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/_Code_License-MIT-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
# MiniMax-VL-01
## 1. Introduction
We are delighted to introduce our **MiniMax-VL-01** model. It adopts the “ViT-MLP-LLM” framework commonly used in multimodal large language models. The model is initialized and trained from three key components: a 303-million-parameter Vision Transformer (ViT) for visual encoding, a randomly initialized two-layer MLP projector for image adaptation, and MiniMax-Text-01 as the base LLM.
MiniMax-VL-01 features a dynamic-resolution mechanism. Input images are resized according to a pre-set grid, with resolutions ranging from 336×336 to 2016×2016, while a 336×336 thumbnail is kept. The resized image is split into non-overlapping patches of equal size; these patches and the thumbnail are encoded separately and then combined into a full image representation.
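For illustration, the following is a minimal sketch of what such dynamic-resolution tiling can look like with PIL; the grid-selection rule, the aspect-ratio handling, and the `dynamic_tiles` helper are assumptions for exposition only — in practice the released processor (see the Quickstart below) performs this step.
```python
from PIL import Image

TILE = 336          # base tile size consumed by the vision encoder
MAX_SIDE = 2016     # largest grid resolution mentioned above

def dynamic_tiles(image: Image.Image, tile: int = TILE, max_side: int = MAX_SIDE):
    """Illustrative tiling: resize to a tile-aligned grid, then split into patches."""
    w, h = image.size
    # snap each side to a multiple of the tile size, clamped to the allowed range
    grid_w = min(max_side, max(tile, round(w / tile) * tile))
    grid_h = min(max_side, max(tile, round(h / tile) * tile))
    resized = image.resize((grid_w, grid_h))

    # non-overlapping tile×tile patches
    patches = [
        resized.crop((x, y, x + tile, y + tile))
        for y in range(0, grid_h, tile)
        for x in range(0, grid_w, tile)
    ]
    # a global 336×336 thumbnail is kept alongside the patches
    thumbnail = image.resize((tile, tile))
    return patches, thumbnail

# patches and thumbnail would each be encoded by the ViT and then combined
```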
The training data for MiniMax-VL-01 consists of caption, description, and instruction data. The Vision Transformer (ViT) is trained from scratch on 694 million image-caption pairs. Across the four stages of the training pipeline, a total of 512 billion tokens are processed, giving the model its strong multimodal capabilities.
Finally, MiniMax-VL-01 reaches top-level performance on multimodal leaderboards, demonstrating its strength and reliability in complex multimodal tasks.
<p align="center">
<img width="100%" src="figures/VisionBench.png">
</p>
## 2. Evaluation
| Tasks | GPT-4o<br>(11-20) | Claude-3.5-Sonnet (10-22) | Gemini-1.5-Pro (002) | Gemini-2.0-Flash (exp) | Qwen2-VL-72B-Inst. | InternVL2.5-78B | Llama-3.2-90B | MiniMax-VL-01 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| **Knowledge** | | | | | | | | |
| MMMU<sup>*</sup> | 63.5 | **72.0** | 68.4 | 70.6 | 64.5 | 66.5 | 62.1 | 68.5 |
| MMMU-Pro<sup>*</sup> | 54.5 | 54.7 | 50.9 | **57.0** | 43.2 | 47.3 | 36.0 | 52.7 |
| **Visual Q&A** | | | | | | | | |
| ChartQA<sup>*</sup><sub>relaxed</sub> | 88.1 | 90.8 | 88.7 | 88.3 | 91.2 | 91.5 | 85.5 | **91.7** |
| DocVQA<sup>*</sup> | 91.1 | 94.2 | 91.5 | 92.9 | **97.1** | 96.1 | 90.1 | 96.4 |
| OCRBench | 806 | 790 | 800 | 846 | 856 | 847 | 805 | **865** |
| **Mathematics & Sciences** | | | | | | | | |
| AI2D<sup>*</sup> | 83.1 | 82.0 | 80.9 | 85.1 | 84.4 | **86.8** | 78.9 | 83.3 |
| MathVista<sup>*</sup> | 62.1 | 65.4 | 70.6 | **73.1** | 69.6 | 68.4 | 57.3 | 68.6 |
| OlympiadBench<sub>full</sub> | 25.2 | 28.4 | 32.1 | **46.1** | 21.9 | 25.1 | 19.3 | 24.2 |
| **Long Context** | | | | | | | | |
| M-LongDoc<sub>acc</sub> | **41.4** | 31.4 | 26.2 | 31.4 | 11.6 | 19.7 | 13.9 | 32.5 |
| **Comprehensive** | | | | | | | | |
| MEGA-Bench<sub>macro</sub> | 49.4 | 51.4 | 45.9 | **53.9** | 46.8 | 45.3 | 19.9 | 47.4 |
| **User Experience** | | | | | | | | |
| In-house Benchmark | 62.3 | 47.0 | 49.2 | **72.1** | 40.6 | 34.8 | 13.6 | 56.6 |
<sup>*</sup> Evaluated following a _0-shot CoT_ setting.
## 3. Quickstart
Here we provide a simple example of loading the tokenizer and model to generate content.
```python
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig, QuantoConfig, GenerationConfig
import torch
import json
import os
from PIL import Image
# load hf config
hf_config = AutoConfig.from_pretrained("MiniMaxAI/MiniMax-VL-01", trust_remote_code=True)
# quantization config, int8 is recommended
quantization_config = QuantoConfig(
weights="int8",
modules_to_not_convert=[
"vision_tower",
"image_newline",
"multi_modal_projector",
"lm_head",
"embed_tokens",
] + [f"model.layers.{i}.coefficient" for i in range(hf_config.text_config.num_hidden_layers)]
+ [f"model.layers.{i}.block_sparse_moe.gate" for i in range(hf_config.text_config.num_hidden_layers)]
)
# set device map
model_safetensors_index_path = os.path.join("MiniMax-VL-01", "model.safetensors.index.json")
with open(model_safetensors_index_path, "r") as f:
model_safetensors_index = json.load(f)
weight_map = model_safetensors_index['weight_map']
vision_map = {}
for key, value in weight_map.items():
if 'vision_tower' in key or 'image_newline' in key or 'multi_modal_projector' in key:
new_key = key.replace('.weight','').replace('.bias','')
if new_key not in vision_map:
vision_map[new_key] = value
# assume 8 GPUs
world_size = 8
device_map = {
'language_model.model.embed_tokens': 'cuda:0',
'language_model.model.norm': f'cuda:{world_size - 1}',
'language_model.lm_head': f'cuda:{world_size - 1}'
}
for key, value in vision_map.items():
    device_map[key] = 'cuda:0'
device_map['vision_tower.vision_model.post_layernorm'] = 'cuda:0'
layers_per_device = hf_config.text_config.num_hidden_layers // world_size
for i in range(world_size):
for j in range(layers_per_device):
device_map[f'language_model.model.layers.{i * layers_per_device + j}'] = f'cuda:{i}'
# load processor
processor = AutoProcessor.from_pretrained("MiniMaxAI/MiniMax-VL-01", trust_remote_code=True)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant created by MiniMax based on MiniMax-VL-01 model."}]},
{"role": "user", "content": [{"type": "image", "image": "placeholder"},{"type": "text", "text": "Describe this image."}]},
]
prompt = processor.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
raw_image = Image.open("figures/image.jpg")
# tokenize and move to device
model_inputs = processor(images=[raw_image], text=prompt, return_tensors='pt').to('cuda').to(torch.bfloat16)
# load bfloat16 model, move to device, and apply quantization
quantized_model = AutoModelForCausalLM.from_pretrained(
"MiniMaxAI/MiniMax-VL-01",
torch_dtype="bfloat16",
device_map=device_map,
quantization_config=quantization_config,
trust_remote_code=True,
offload_buffers=True,
)
generation_config = GenerationConfig(
max_new_tokens=100,
eos_token_id=200020,
use_cache=True,
)
# generate response
generated_ids = quantized_model.generate(**model_inputs, generation_config=generation_config)
print(f"generated_ids: {generated_ids}")
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = processor.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## 4. Citation
```
@misc{minimax2025minimax01scalingfoundationmodels,
title={MiniMax-01: Scaling Foundation Models with Lightning Attention},
author={MiniMax and Aonian Li and Bangwei Gong and Bo Yang and Boji Shan and Chang Liu and Cheng Zhu and Chunhao Zhang and Congchao Guo and Da Chen and Dong Li and Enwei Jiao and Gengxin Li and Guojun Zhang and Haohai Sun and Houze Dong and Jiadai Zhu and Jiaqi Zhuang and Jiayuan Song and Jin Zhu and Jingtao Han and Jingyang Li and Junbin Xie and Junhao Xu and Junjie Yan and Kaishun Zhang and Kecheng Xiao and Kexi Kang and Le Han and Leyang Wang and Lianfei Yu and Liheng Feng and Lin Zheng and Linbo Chai and Long Xing and Meizhi Ju and Mingyuan Chi and Mozhi Zhang and Peikai Huang and Pengcheng Niu and Pengfei Li and Pengyu Zhao and Qi Yang and Qidi Xu and Qiexiang Wang and Qin Wang and Qiuhui Li and Ruitao Leng and Shengmin Shi and Shuqi Yu and Sichen Li and Songquan Zhu and Tao Huang and Tianrun Liang and Weigao Sun and Weixuan Sun and Weiyu Cheng and Wenkai Li and Xiangjun Song and Xiao Su and Xiaodong Han and Xinjie Zhang and Xinzhu Hou and Xu Min and Xun Zou and Xuyang Shen and Yan Gong and Yingjie Zhu and Yipeng Zhou and Yiran Zhong and Yongyi Hu and Yuanxiang Fan and Yue Yu and Yufeng Yang and Yuhao Li and Yunan Huang and Yunji Li and Yunpeng Huang and Yunzhi Xu and Yuxin Mao and Zehan Li and Zekang Li and Zewei Tao and Zewen Ying and Zhaoyang Cong and Zhen Qin and Zhenhua Fan and Zhihang Yu and Zhuo Jiang and Zijia Wu},
year={2025},
eprint={2501.08313},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.08313},
}
```
## 5. Chatbot & API
For general use and evaluation, we provide a [Chatbot](https://chat.minimax.io/) with online search capabilities and the [online API](https://intl.minimaxi.com) for developers.
Contact us at [[email protected]](mailto:[email protected]). | {
"source": "MiniMax-AI/MiniMax-01",
"title": "MiniMax-VL-01-Model-Card.md",
"url": "https://github.com/MiniMax-AI/MiniMax-01/blob/main/MiniMax-VL-01-Model-Card.md",
"date": "2025-01-14T15:43:28",
"stars": 2231,
"description": null,
"file_size": 13386
} |
<div align="center">
<picture>
<source srcset="figures/MiniMaxLogo-Dark.png" media="(prefers-color-scheme: dark)">
<img src="figures/MiniMaxLogo-Light.png" width="60%" alt="MiniMax">
</picture>
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.minimaxi.com/en" target="_blank" style="margin: 2px; color: var(--fgColor-default);">
<img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://arxiv.org/abs/2501.08313" target="_blank" style="margin: 2px;">
<img alt="Paper" src="https://img.shields.io/badge/📖_Paper-MiniMax--01-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.minimax.io/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/_MiniMax_Chat-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://intl.minimaxi.com" style="margin: 2px;">
<img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/MiniMaxAI" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/🤗_Hugging_Face-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/figures/wechat-qrcode.jpeg" target="_blank" style="margin: 2px;">
<img alt="WeChat" src="https://img.shields.io/badge/_WeChat-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/_Model_License-Model_Agreement-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/_Code_License-MIT-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
# MiniMax-01
## 1. Introduction
We are delighted to introduce two remarkable models, **MiniMax-Text-01** and **MiniMax-VL-01**.
MiniMax-Text-01 is a powerful language model boasting 456 billion total parameters, with 45.9 billion activated per token. To unlock its long-context capabilities, it adopts a hybrid architecture integrating Lightning Attention, Softmax Attention, and Mixture-of-Experts (MoE). Leveraging advanced parallel strategies like Linear Attention Sequence Parallelism Plus (LASP+), varlen ring attention, and Expert Tensor Parallel (ETP), its training context length extends to 1 million tokens, and it can handle up to 4 million tokens during inference. Consequently, MiniMax-Text-01 showcases top-tier performance on various academic benchmarks.
Building on MiniMax-Text-01's prowess, we developed MiniMax-VL-01 for enhanced visual capabilities. It uses the “ViT-MLP-LLM” framework common in multimodal LLMs. It is initialized and trained using three key components: a 303-million-parameter Vision Transformer (ViT) for visual encoding, a randomly initialized two-layer MLP projector for image adaptation, and MiniMax-Text-01 as the base LLM. This model features a dynamic resolution mechanism. Input images are resized according to a pre-set grid, with resolutions ranging from 336×336 to 2016×2016, while maintaining a 336×336 thumbnail. The resized images are split into non-overlapping patches of the same size. These patches and the thumbnail are encoded separately and then combined to form a full image representation. As a result, MiniMax-VL-01 has achieved top-level performance on multimodal leaderboards, demonstrating its edge in complex multimodal tasks.
<p align="center">
<img width="100%" src="figures/TextBench.png">
</p>
<p align="center">
<img width="100%" src="figures/VisionBench.png">
</p>
## 2. Model Architecture
The architecture of MiniMax-Text-01 is briefly described as follows:
- Total Parameters: 456B
- Activated Parameters per Token: 45.9B
- Number of Layers: 80
- Hybrid Attention: a softmax-attention layer is placed after every 7 lightning-attention layers (a layer-layout sketch follows this list).
- Number of attention heads: 64
- Attention head dimension: 128
- Mixture of Experts:
- Number of experts: 32
- Expert hidden dimension: 9216
- Top-2 routing strategy
- Positional Encoding: Rotary Position Embedding (RoPE) applied to half of the attention head dimension with a base frequency of 10,000,000
- Hidden Size: 6144
- Vocab Size: 200,064
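To make the hybrid attention layout concrete, the sketch below enumerates which of the 80 layers would use softmax attention under the "one softmax-attention layer after every 7 lightning-attention layers" rule; the exact offset of the cycle is an assumption.
```python
NUM_LAYERS = 80      # total transformer layers
CYCLE = 8            # 7 lightning-attention layers followed by 1 softmax-attention layer

# assumed layout: positions 8, 16, ..., 80 use softmax attention
layer_types = [
    "softmax" if (i + 1) % CYCLE == 0 else "lightning"
    for i in range(NUM_LAYERS)
]

print(layer_types[:CYCLE])           # 7 × 'lightning' followed by 'softmax'
print(layer_types.count("softmax"))  # 10 softmax-attention layers out of 80
```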
For MiniMax-VL-01, the additional ViT architecture details are as follows (a per-tile token-count sketch follows the list):
- Total Parameters: 303M
- Number of layers: 24
- Patch size: 14
- Hidden size: 1024
- FFN hidden size: 4096
- Number of heads: 16
- Attention head dimension: 64
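As a quick worked example tying these ViT parameters to the dynamic-resolution scheme above, the sketch below computes the patch-token count per 336×336 tile implied by the 14-pixel patch size; whether additional tokens (e.g., a class token or image-newline tokens) are appended is not stated here, so the exact sequence length is an assumption.
```python
TILE = 336        # tile resolution fed to the ViT
PATCH = 14        # ViT patch size

patches_per_side = TILE // PATCH          # 24
tokens_per_tile = patches_per_side ** 2   # 576 patch tokens per 336×336 tile

# a 2016×2016 grid holds (2016 // 336) ** 2 = 36 tiles plus the thumbnail
max_tiles = (2016 // TILE) ** 2
print(tokens_per_tile, max_tiles)         # 576 36
```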
## 3. Evaluation
### Text Benchmarks
#### Core Academic Benchmarks
| **Tasks** | **GPT-4o (11-20)** | **Claude-3.5-Sonnet (10-22)** | **Gemini-1.5-Pro (002)** | **Gemini-2.0-Flash (exp)** | **Qwen2.5-72B-Inst.** | **DeepSeek-V3** | **Llama-3.1-405B-Inst.** | **MiniMax-Text-01** |
|-------------------------------|--------------------|-------------------------------|--------------------------|----------------------------|-----------------------|-----------------|--------------------------|---------------------|
| **General** | | | | | | | | |
| MMLU<sup>*</sup> | 85.7 | 88.3 | 86.8 | 86.5 | 86.1 | 88.5 | **88.6** | 88.5 |
| MMLU-Pro<sup>*</sup> | 74.4 | **78.0** | 75.8 | 76.4 | 71.1 | 75.9 | 73.3 | 75.7 |
| SimpleQA | **39.0** | 28.1 | 23.4 | 26.6 | 10.3 | 24.9 | 23.2 | 23.7 |
| C-SimpleQA | 64.6 | 56.8 | 59.4 | 63.3 | 52.2 | 64.8 | 54.7 | **67.4** |
| IFEval _(avg)_ | 84.1 | **90.1** | 89.4 | 88.4 | 87.2 | 87.3 | 86.4 | 89.1 |
| Arena-Hard | **92.4** | 87.6 | 85.3 | 72.7 | 81.2 | 91.4 | 63.5 | 89.1 |
| **Reasoning** | | | | | | | | |
| GPQA<sup>*</sup> _(diamond)_ | 46.0 | **65.0** | 59.1 | 62.1 | 49.0 | 59.1 | 50.7 | 54.4 |
| DROP<sup>*</sup> _(F1)_ | 89.2 | 88.8 | 89.2 | 89.3 | 85.0 | 91.0 | **92.5** | 87.8 |
| **Mathematics** | | | | | | | | |
| GSM8k<sup>*</sup> | 95.6 | **96.9** | 95.2 | 95.4 | 95.8 | 96.7 | 96.7 | 94.8 |
| MATH<sup>*</sup> | 76.6 | 74.1 | **84.6** | 83.9 | 81.8 | **84.6** | 73.8 | 77.4 |
| **Coding** | | | | | | | | |
| MBPP+ | 76.2 | 75.1 | 75.4 | 75.9 | 77.0 | **78.8** | 73.0 | 71.7 |
| HumanEval | 90.2 | **93.7** | 86.6 | 89.6 | 86.6 | 92.1 | 89.0 | 86.9 |
<sup>*</sup> Evaluated following a _0-shot CoT_ setting.
#### Long Benchmarks
**4M Needle In A Haystack Test**
<p align="center">
<img width="90%" src="figures/niah.png">
</p>
**Ruler**
| Model | 4k | 8k | 16k | 32k | 64k | 128k | 256k | 512k | 1M |
|-------|----|----|-----|-----|-----|------|------|------|----|
| **GPT-4o (11-20)** | **0.970** | 0.921 | 0.890 | 0.888 | 0.884 | - | - | - | - |
| **Claude-3.5-Sonnet (10-22)** | 0.965 | 0.960 | 0.957 | 0.950 | **0.952** | 0.938 | - | - | - |
| **Gemini-1.5-Pro (002)** | 0.962 | 0.960 | **0.960** | **0.958** | 0.938 | 0.917 | 0.916 | 0.861 | 0.850 |
| **Gemini-2.0-Flash (exp)** | 0.960 | 0.960 | 0.951 | 0.957 | 0.937 | 0.860 | 0.797 | 0.709 | - |
| **MiniMax-Text-01** | 0.963 | **0.961** | 0.953 | 0.954 | 0.943 | **0.947** | **0.945** | **0.928** | **0.910** |
**LongBench v2**
| **Model** | **overall** | **easy** | **hard** | **short** | **medium** | **long** |
|----------------------------|-------------|----------|----------|------------|------------|----------|
| Human | 53.7 | 100.0 | 25.1 | 47.2 | 59.1 | 53.7 |
| **w/ CoT** | | | | | | |
| GPT-4o (11-20) | 51.4 | 54.2 | 49.7 | 59.6 | 48.6 | 43.5 |
| Claude-3.5-Sonnet (10-22) | 46.7 | 55.2 | 41.5 | 53.9 | 41.9 | 44.4 |
| DeepSeek-V3 | - | - | - | - | - | - |
| Qwen2.5-72B-Inst. | 43.5 | 47.9 | 40.8 | 48.9 | 40.9 | 39.8 |
| **MiniMax-Text-01** | **56.5** | **66.1** | **50.5** | **61.7** | **56.7** | **47.2** |
| **w/o CoT** | | | | | | |
| GPT-4o (11-20) | 50.1 | 57.4 | 45.6 | 53.3 | 52.4 | 40.2 |
| Claude-3.5-Sonnet (10-22) | 41.0 | 46.9 | 37.3 | 46.1 | 38.6 | 37.0 |
| DeepSeek-V3 | 48.7 | - | - | - | - | - |
| Qwen2.5-72B-Inst. | 42.1 | 42.7 | 41.8 | 45.6 | 38.1 | **44.4** |
| **MiniMax-Text-01** | **52.9** | **60.9** | **47.9** | **58.9** | **52.6** | 43.5 |
**MTOB**
| **Context Type** | **no context** | **half book** | **full book** | **Δ half book** | **Δ full book** |
|------------------|----------------|---------------|---------------|------------------|-----------------|
| **eng → kalam (ChrF)** | | | | | |
| GPT-4o (11-20) | 9.90 | **54.30** | - | 44.40 | - |
| Claude-3.5-Sonnet (10-22) | 20.22 | 53.62 | 55.65 | 33.39 | 35.42 |
| Gemini-1.5-Pro (002) | 16.79 | 53.68 | **57.90** | 36.89 | 41.11 |
| Gemini-2.0-Flash (exp) | 12.20 | 49.50 | 53.30 | 37.30 | 41.10 |
| Qwen-Long | 16.55 | 48.48 | 45.94 | 31.92 | 29.39 |
| **MiniMax-Text-01** | 6.0 | 51.74 | 51.60 | **45.7** | **45.6** |
| **kalam → eng (BLEURT)** | | | | | |
| GPT-4o (11-20) | 33.20 | 58.30 | - | 25.10 | - |
| Claude-3.5-Sonnet (10-22) | 31.42 | 59.70 | 62.30 | 28.28 | 30.88 |
| Gemini-1.5-Pro (002) | 32.02 | **61.52** | **63.09** | **29.50** | **31.07** |
| Gemini-2.0-Flash (exp) | 33.80 | 57.50 | 57.00 | 23.70 | 23.20 |
| Qwen-Long | 30.13 | 53.14 | 32.15 | 23.01 | 2.02 |
| **MiniMax-Text-01** | 33.65 | 57.10 | 58.00 | 23.45 | 24.35 |
### Vision Benchmarks
| Tasks | GPT-4o<br>(11-20) | Claude-3.5-Sonnet (10-22) | Gemini-1.5-Pro (002) | Gemini-2.0-Flash (exp) | Qwen2-VL-72B-Inst. | InternVL2.5-78B | Llama-3.2-90B | MiniMax-VL-01 |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| **Knowledge** | | | | | | | | |
| MMMU<sup>*</sup> | 63.5 | **72.0** | 68.4 | 70.6 | 64.5 | 66.5 | 62.1 | 68.5 |
| MMMU-Pro<sup>*</sup> | 54.5 | 54.7 | 50.9 | **57.0** | 43.2 | 47.3 | 36.0 | 52.7 |
| **Visual Q&A** | | | | | | | | |
| ChartQA<sup>*</sup><sub>relaxed</sub> | 88.1 | 90.8 | 88.7 | 88.3 | 91.2 | 91.5 | 85.5 | **91.7** |
| DocVQA<sup>*</sup> | 91.1 | 94.2 | 91.5 | 92.9 | **97.1** | 96.1 | 90.1 | 96.4 |
| OCRBench | 806 | 790 | 800 | 846 | 856 | 847 | 805 | **865** |
| **Mathematics & Sciences** | | | | | | | | |
| AI2D<sup>*</sup> | 83.1 | 82.0 | 80.9 | 85.1 | 84.4 | **86.8** | 78.9 | 83.3 |
| MathVista<sup>*</sup> | 62.1 | 65.4 | 70.6 | **73.1** | 69.6 | 68.4 | 57.3 | 68.6 |
| OlympiadBench<sub>full</sub> | 25.2 | 28.4 | 32.1 | **46.1** | 21.9 | 25.1 | 19.3 | 24.2 |
| **Long Context** | | | | | | | | |
| M-LongDoc<sub>acc</sub> | **41.4** | 31.4 | 26.2 | 31.4 | 11.6 | 19.7 | 13.9 | 32.5 |
| **Comprehensive** | | | | | | | | |
| MEGA-Bench<sub>macro</sub> | 49.4 | 51.4 | 45.9 | **53.9** | 46.8 | 45.3 | 19.9 | 47.4 |
| **User Experience** | | | | | | | | |
| In-house Benchmark | 62.3 | 47.0 | 49.2 | **72.1** | 40.6 | 34.8 | 13.6 | 56.6 |
<sup>*</sup> Evaluated following a _0-shot CoT_ setting.
## 4. Quickstart
Here, we provide simple examples demonstrating how to use MiniMax-Text-01 and MiniMax-VL-01, respectively.
### MiniMax-Text-01
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig, QuantoConfig, GenerationConfig
# load hf config
hf_config = AutoConfig.from_pretrained("MiniMaxAI/MiniMax-Text-01", trust_remote_code=True)
# quantization config, int8 is recommended
quantization_config = QuantoConfig(
weights="int8",
modules_to_not_convert=[
"lm_head",
"embed_tokens",
] + [f"model.layers.{i}.coefficient" for i in range(hf_config.num_hidden_layers)]
+ [f"model.layers.{i}.block_sparse_moe.gate" for i in range(hf_config.num_hidden_layers)]
)
# assume 8 GPUs
world_size = 8
layers_per_device = hf_config.num_hidden_layers // world_size
# set device map
device_map = {
'model.embed_tokens': 'cuda:0',
'model.norm': f'cuda:{world_size - 1}',
'lm_head': f'cuda:{world_size - 1}'
}
for i in range(world_size):
for j in range(layers_per_device):
device_map[f'model.layers.{i * layers_per_device + j}'] = f'cuda:{i}'
# load tokenizer
tokenizer = AutoTokenizer.from_pretrained("MiniMaxAI/MiniMax-Text-01")
prompt = "Hello!"
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant created by MiniMax based on MiniMax-Text-01 model."}]},
{"role": "user", "content": [{"type": "text", "text": prompt}]},
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# tokenize and move to device
model_inputs = tokenizer(text, return_tensors="pt").to("cuda")
# load bfloat16 model, move to device, and apply quantization
quantized_model = AutoModelForCausalLM.from_pretrained(
"MiniMaxAI/MiniMax-Text-01",
torch_dtype="bfloat16",
device_map=device_map,
quantization_config=quantization_config,
trust_remote_code=True,
offload_buffers=True,
)
# generate response
generation_config = GenerationConfig(
max_new_tokens=20,
eos_token_id=200020,
use_cache=True,
)
generated_ids = quantized_model.generate(**model_inputs, generation_config=generation_config)
print(f"generated_ids: {generated_ids}")
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### MiniMax-VL-01
```python
from transformers import AutoModelForCausalLM, AutoProcessor, AutoConfig, QuantoConfig, GenerationConfig
import torch
import json
import os
from PIL import Image
# load hf config
hf_config = AutoConfig.from_pretrained("MiniMaxAI/MiniMax-VL-01", trust_remote_code=True)
# quantization config, int8 is recommended
quantization_config = QuantoConfig(
weights="int8",
modules_to_not_convert=[
"vision_tower",
"image_newline",
"multi_modal_projector",
"lm_head",
"embed_tokens",
] + [f"model.layers.{i}.coefficient" for i in range(hf_config.text_config.num_hidden_layers)]
+ [f"model.layers.{i}.block_sparse_moe.gate" for i in range(hf_config.text_config.num_hidden_layers)]
)
# set device map
model_safetensors_index_path = os.path.join("MiniMax-VL-01", "model.safetensors.index.json")
with open(model_safetensors_index_path, "r") as f:
model_safetensors_index = json.load(f)
weight_map = model_safetensors_index['weight_map']
vision_map = {}
for key, value in weight_map.items():
if 'vision_tower' in key or 'image_newline' in key or 'multi_modal_projector' in key:
new_key = key.replace('.weight','').replace('.bias','')
if new_key not in vision_map:
vision_map[new_key] = value
# assume 8 GPUs
world_size = 8
device_map = {
'language_model.model.embed_tokens': 'cuda:0',
'language_model.model.norm': f'cuda:{world_size - 1}',
'language_model.lm_head': f'cuda:{world_size - 1}'
}
for key, value in vision_map.items():
    device_map[key] = 'cuda:0'
device_map['vision_tower.vision_model.post_layernorm'] = 'cuda:0'
layers_per_device = hf_config.text_config.num_hidden_layers // world_size
for i in range(world_size):
for j in range(layers_per_device):
device_map[f'language_model.model.layers.{i * layers_per_device + j}'] = f'cuda:{i}'
# load processor
processor = AutoProcessor.from_pretrained("MiniMaxAI/MiniMax-VL-01", trust_remote_code=True)
messages = [
{"role": "system", "content": [{"type": "text", "text": "You are a helpful assistant created by MiniMax based on MiniMax-VL-01 model."}]},
{"role": "user", "content": [{"type": "image", "image": "placeholder"},{"type": "text", "text": "Describe this image."}]},
]
prompt = processor.tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
raw_image = Image.open("figures/image.jpg")
# tokenize and move to device
model_inputs = processor(images=[raw_image], text=prompt, return_tensors='pt').to('cuda').to(torch.bfloat16)
# load bfloat16 model, move to device, and apply quantization
quantized_model = AutoModelForCausalLM.from_pretrained(
"MiniMaxAI/MiniMax-VL-01",
torch_dtype="bfloat16",
device_map=device_map,
quantization_config=quantization_config,
trust_remote_code=True,
offload_buffers=True,
)
generation_config = GenerationConfig(
max_new_tokens=100,
eos_token_id=200020,
use_cache=True,
)
# generate response
generated_ids = quantized_model.generate(**model_inputs, generation_config=generation_config)
print(f"generated_ids: {generated_ids}")
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = processor.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## 5. Citation
```
@misc{minimax2025minimax01scalingfoundationmodels,
title={MiniMax-01: Scaling Foundation Models with Lightning Attention},
author={MiniMax and Aonian Li and Bangwei Gong and Bo Yang and Boji Shan and Chang Liu and Cheng Zhu and Chunhao Zhang and Congchao Guo and Da Chen and Dong Li and Enwei Jiao and Gengxin Li and Guojun Zhang and Haohai Sun and Houze Dong and Jiadai Zhu and Jiaqi Zhuang and Jiayuan Song and Jin Zhu and Jingtao Han and Jingyang Li and Junbin Xie and Junhao Xu and Junjie Yan and Kaishun Zhang and Kecheng Xiao and Kexi Kang and Le Han and Leyang Wang and Lianfei Yu and Liheng Feng and Lin Zheng and Linbo Chai and Long Xing and Meizhi Ju and Mingyuan Chi and Mozhi Zhang and Peikai Huang and Pengcheng Niu and Pengfei Li and Pengyu Zhao and Qi Yang and Qidi Xu and Qiexiang Wang and Qin Wang and Qiuhui Li and Ruitao Leng and Shengmin Shi and Shuqi Yu and Sichen Li and Songquan Zhu and Tao Huang and Tianrun Liang and Weigao Sun and Weixuan Sun and Weiyu Cheng and Wenkai Li and Xiangjun Song and Xiao Su and Xiaodong Han and Xinjie Zhang and Xinzhu Hou and Xu Min and Xun Zou and Xuyang Shen and Yan Gong and Yingjie Zhu and Yipeng Zhou and Yiran Zhong and Yongyi Hu and Yuanxiang Fan and Yue Yu and Yufeng Yang and Yuhao Li and Yunan Huang and Yunji Li and Yunpeng Huang and Yunzhi Xu and Yuxin Mao and Zehan Li and Zekang Li and Zewei Tao and Zewen Ying and Zhaoyang Cong and Zhen Qin and Zhenhua Fan and Zhihang Yu and Zhuo Jiang and Zijia Wu},
year={2025},
eprint={2501.08313},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.08313},
}
```
## 6. Chatbot & API
For general use and evaluation, we provide a [Chatbot](https://chat.minimax.io/) with online search capabilities and the [online API](https://intl.minimaxi.com) for developers.
Contact us at [[email protected]](mailto:[email protected]). | {
"source": "MiniMax-AI/MiniMax-01",
"title": "README.md",
"url": "https://github.com/MiniMax-AI/MiniMax-01/blob/main/README.md",
"date": "2025-01-14T15:43:28",
"stars": 2231,
"description": null,
"file_size": 24872
} |
# Awesome AI/ML Resources
This repository contains free resources and a roadmap to learn Machine Learning and Artificial Intelligence in 2025.
## 📌 AI/ML Key Concepts
- [Supervised Learning](https://medium.com/@kodeinkgp/supervised-learning-a-comprehensive-guide-7032b34d5097)
- [Unsupervised Learning](https://cloud.google.com/discover/what-is-unsupervised-learning?hl=en#what-is-unsupervised-learning)
- [Reinforcement Learning](https://spinningup.openai.com/en/latest/user/introduction.html#what-this-is)
- [Deep Learning](https://www.datacamp.com/tutorial/tutorial-deep-learning-tutorial)
- [Natural Language Processing (NLP)](https://medium.com/@ageitgey/natural-language-processing-is-fun-9a0bff37854e)
- [Computer Vision](https://www.geeksforgeeks.org/computer-vision/)
- [Generative adversarial networks (GANs)](https://aws.amazon.com/what-is/gan/)
- [Dimensionality Reduction](https://scikit-learn.org/stable/modules/decomposition.html)
- [Clustering Algorithms](https://scikit-learn.org/stable/modules/clustering.html)
- [Bayesian Inference](https://www.statlect.com/fundamentals-of-statistics/Bayesian-inference#:~:text=Bayesian%20inference%20is%20a%20way,that%20could%20generate%20the%20data.)
- [Time Series Analysis](https://otexts.com/fpp3/)
- [Self-Supervised Learning](https://lilianweng.github.io/posts/2021-05-31-self-supervised-learning/)
## 🛠️ AI/ML Building Blocks
- [Linear Algebra for Machine Learning](https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/)
- [Probability & Statistics](https://www.youtube.com/watch?v=2MuDZIAzBMY&list=PLoROMvodv4rOpr_A7B9SriE_iZmkanvUg)
- [Calculus for Optimization](https://www.khanacademy.org/math/multivariable-calculus)
- [Python for Machine Learning](https://www.coursera.org/learn/ai-python-for-beginners)
- [Optimization Techniques](https://www.geeksforgeeks.org/optimization-algorithms-in-machine-learning/)
- [Data Preprocessing & Feature Engineering](https://www.geeksforgeeks.org/what-is-feature-engineering/)
- [Model Evaluation & Metrics](https://scikit-learn.org/stable/modules/model_evaluation.html)
- [Regularization Techniques](https://www.geeksforgeeks.org/regularization-in-machine-learning/)
- [Loss Functions](https://www.datacamp.com/tutorial/loss-function-in-machine-learning)
- [Activation Functions](https://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html)
- [Hyperparameter Tuning](https://www.geeksforgeeks.org/hyperparameter-tuning/)
## 👨🏽💻 AI/ML Roles
- [Machine Learning Engineer](https://www.coursera.org/articles/what-is-machine-learning-engineer)
- [Data Scientist](https://www.coursera.org/articles/what-is-a-data-scientist)
- [Software Engineer (AI)](https://www.coursera.org/articles/ai-engineer)
- [ML/AI Platform Engineer](https://ml-ops.org/)
- [ML/AI Infrastructure Engineer](https://www.databricks.com/glossary/mlops)
- [Framework Engineer](https://careers.qualcomm.com/careers/job/446698240161)
- [Solution Architect](https://www.coursera.org/articles/solutions-architect)
- [Developer Advocate](https://www.freecodecamp.org/news/what-the-heck-is-a-developer-advocate-87ab4faccfc4/)
- [Solutions Engineer](https://www.coursera.org/articles/solutions-engineer)
- [Applied Research Scientist](https://www.indeed.com/career-advice/finding-a-job/data-scientist-vs-research-scientist-vs-applied-scientist)
- [Research Engineer](https://www.indeed.com/career-advice/finding-a-job/research-engineers)
- [Research Scientist](https://www.coursera.org/articles/research-scientist)
## 🚗 AI/ML Roadmap
1. Learn Python and Core Libraries
- [Intro Python](https://cs50.harvard.edu/python/2022/)
- [Advanced Python](https://www.edx.org/learn/artificial-intelligence/harvard-university-cs50-s-introduction-to-artificial-intelligence-with-python)
- [NumPy: Numerical computing and arrays](https://numpy.org/devdocs/user/quickstart.html)
- [Pandas: Data manipulation and analysis](https://www.w3schools.com/python/pandas/default.asp)
- [Matplotlib & Seaborn: Data visualization](https://matplotlib.org/stable/tutorials/index.html)
- [scikit-learn: Implement ML algorithms](https://scikit-learn.org/1.4/tutorial/index.html)
2. Build a Strong Math Foundation
- [Linear Algebra](https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/)
- [Probability & Statistics](https://web.stanford.edu/class/stats116/syllabus.html)
- [Calculus](https://www.khanacademy.org/math/multivariable-calculus)
3. Learn Machine Learning Fundamentals
- [Google Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course)
- [Machine Learning by Andrew Ng](https://www.coursera.org/learn/machine-learning)
- [Read Hundred-Page ML Book](http://ema.cri-info.cm/wp-content/uploads/2019/07/2019BurkovTheHundred-pageMachineLearning.pdf)
4. Build Practical Experience
- [Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
- [Practical Deep Learning for Coders](https://course.fast.ai/)
- [Structured Machine Learning Projects](https://www.coursera.org/learn/machine-learning-projects)
- [Build GPT](https://www.youtube.com/watch?v=kCc8FmEb1nY&t=1331s)
5. Deepen Knowledge in Specialized Areas
- [Natural Language Processing](https://huggingface.co/learn/nlp-course/chapter1/1)
- [Reinforcement Learning](https://huggingface.co/learn/deep-rl-course/unit0/introduction)
- [Computer Vision](https://www.kaggle.com/learn/computer-vision)
- [Deep Learning](https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PLSVEhWrZWDHQTBmWZufjxpw3s8sveJtnJ&index=1)
- [Transformers](https://huggingface.co/learn/nlp-course/chapter1/1)
6. Learn about MLOps
- [Intro to MLOps](https://ml-ops.org/)
- [Three levels of ML](https://ml-ops.org/content/three-levels-of-ml-software)
- [Fullstackdeeplearning](https://fullstackdeeplearning.com/course/2022/)
7. Read Interesting Research Papers
- [ArXiv for Research Papers](https://arxiv.org/)
8. Prepare for AI/ML Job Interviews
- [Introduction to Machine Learning Interviews](https://huyenchip.com/ml-interviews-book/)
- [Designing Machine Learning Systems](https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/)
## 📚 Courses
- [Machine Learning by Andrew Ng (Coursera)](https://www.coursera.org/learn/machine-learning)
- [AI For Everyone by Andrew Ng (Coursera)](https://www.coursera.org/learn/ai-for-everyone)
- [Deep Learning Specialization (Coursera)](https://www.coursera.org/specializations/deep-learning)
- [Machine Learning with Python (edX - IBM)](https://www.edx.org/course/machine-learning-with-python-a-practical-introduct)
- [Reinforcement Learning Specialization (Coursera)](https://www.coursera.org/specializations/reinforcement-learning)
- [CS231n: Convolutional Neural Networks for Visual Recognition (Stanford)](https://www.youtube.com/watch?v=vT1JzLTH4G4&list=PLSVEhWrZWDHQTBmWZufjxpw3s8sveJtnJ&index=1)
- [RL Course by David Silver](https://www.youtube.com/watch?v=2pWv7GOvuf0&list=PLqYmG7hTraZDM-OYHWgPebj2MfCFzFObQ)
- [Natural Language Processing with Deep Learning (Stanford - CS224n)](https://www.youtube.com/watch?v=rmVRLeJRkl4&list=PLoROMvodv4rMFqRtEuo6SGjY4XbRIVRd4&index=1)
- [Fast.ai’s Practical Deep Learning for Coders](https://course.fast.ai/)
## 🎓 Certifications
- [AWS Certified Machine Learning Engineer – Associate](https://aws.amazon.com/certification/certified-machine-learning-engineer-associate/)
- [Microsoft Certified: Azure AI Engineer Associate](https://learn.microsoft.com/en-us/certifications/azure-ai-engineer/)
- [Stanford AI and Machine Learning Certificate](https://online.stanford.edu/programs/artificial-intelligence-professional-program)
## 📕 Books
- [Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
- [AI Engineering: Building Applications with Foundational Models](https://www.oreilly.com/library/view/ai-engineering/9781098166298/)
- [Introduction to Machine Learning Interviews](https://huyenchip.com/ml-interviews-book/)
- [Designing Data Intensive Applications](https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/)
- [Designing Machine Learning Systems](https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/)
- [Deep Learning](https://www.deeplearningbook.org/)
## 🛠️ Tools & Frameworks
- [PyTorch](https://www.youtube.com/watch?v=V_xro1bcAuA)
- [TensorFlow](https://www.youtube.com/watch?v=tPYj3fFJGjk)
- [Scikit-Learn](https://scikit-learn.org/stable/getting_started.html)
- [XGBoost](https://xgboost.readthedocs.io/en/latest/)
- [Keras](https://keras.io/getting_started/)
- [Perplexity](https://www.perplexity.ai/)
- [CursorAI](https://www.cursor.com/)
- [Whisper](https://github.com/openai/whisper)
## AI/ML Research Blogs
- [OpenAI Blog](https://openai.com/news/)
- [Google DeepMind](https://deepmind.google/discover/blog/)
- [Google Research](https://research.google/blog/)
- [Apple ML Research](https://machinelearning.apple.com/)
- [Amazon Science](https://www.amazon.science/blog?f0=0000016e-2fb1-d205-a5ef-afb9d52c0000&f0=0000016e-2ff0-da81-a5ef-3ff057f10000&f0=0000016e-2ff1-d205-a5ef-aff9651e0000)
- [Microsoft AI](https://www.microsoft.com/en-us/ai/blog/)
- [Meta AI Blog](https://ai.meta.com/blog/?page=1)
## AI/ML Applied Blogs
- [AWS Machine Learning Blog](https://aws.amazon.com/blogs/machine-learning/)
- [NVIDIA - Deep Learning Blog](https://blogs.nvidia.com/blog/category/deep-learning/)
- [AirBnB Engineering, AI & ML](https://medium.com/airbnb-engineering/ai/home)
- [Spotify Engineering](https://engineering.atspotify.com/)
- [Uber Engineering](https://eng.uber.com/category/articles/ai/)
- [Netflix Blog](https://netflixtechblog.com/)
- [Google AI](https://blog.google/technology/ai/)
## AI/ML Problems
### Easy
- [Matrix times Vector](https://www.deep-ml.com/problems/1)
- [Titanic: Machine Learning from Disaster](https://www.kaggle.com/c/titanic)
- [Predicting House Prices Using Linear Regression](https://www.kaggle.com/competitions/home-data-for-ml-course)
### Medium
- [Single Neuron](https://www.deep-ml.com/problems/24)
- [K-Means Clustering](https://www.deep-ml.com/problems/17)
- [Predicting Loan Default Risk](https://www.kaggle.com/c/home-credit-default-risk)
- [Sentiment Analysis on Movie Reviews](https://www.kaggle.com/c/sentiment-analysis-on-movie-reviews)
### Hard
- [Decision Tree Learning](https://www.deep-ml.com/problems/20)
- [Implement a Simple RNN with Backpropagation](https://www.deep-ml.com/problems/62)
- [Generative Adversarial Networks (GANs) for Image Synthesis](https://www.kaggle.com/c/generative-dog-images)
## ⚡️ AI/ML Communities
- [r/LearnMachineLearning](https://www.reddit.com/r/learnmachinelearning/)
- [Chip Huyen MLOps Discord](https://discord.com/invite/dzh728c5t3)
- [Hugging Face Discord](https://discord.com/invite/hugging-face-879548962464493619)
## 📺 YouTube Channels
- [Stanford Online](https://www.youtube.com/watch?v=jGwO_UgTS7I&list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU)
- [Andrej Karpathy](https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ)
- [FreeCodeCamp](https://www.youtube.com/watch?v=i_LwzRVP7bg)
- [3Blue1Brown](https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi)
- [Sentdex](https://www.youtube.com/watch?v=OGxgnH8y2NM&list=PLQVvvaa0QuDfKTOs3Keq_kaG2P55YRn5v)
## 📩 Newsletters
- [The AI Engineer](https://aimlengineer.io)
## 📃 Must Read Papers
- [Attention Is All You Need (Google)](https://arxiv.org/pdf/1706.03762)
- [DeepSeek R1: Incentivizing Reasoning Capability in LLMs](https://arxiv.org/pdf/2501.12948)
- [Monolith: Real Time Recommendation System (TikTok/ByteDance)](https://arxiv.org/pdf/2209.07663)
- [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805)
- [Understanding Deep Learning Requires Rethinking Generalization](https://arxiv.org/pdf/1611.03530)
- [Playing Atari with Deep Reinforcement Learning](https://arxiv.org/pdf/1312.5602)
- [Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531)
- [Open AI Key Papers in Deep RL](https://spinningup.openai.com/en/latest/spinningup/keypapers.html) | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "README.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/README.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 12385
} |
## Awesome AI/ML books to read
- [Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/)
- [AI Engineering: Building Applications with Foundational Models](https://www.oreilly.com/library/view/ai-engineering/9781098166298/)
- [Probabilistic Machine Learning: An Introduction](https://probml.github.io/pml-book/book1.html)
- [Introduction to Machine Learning Interviews](https://huyenchip.com/ml-interviews-book/)
- [Data Structures and Algorithms in Python](https://www.amazon.com/Structures-Algorithms-Python-Michael-Goodrich/dp/1118290275)
- [Designing Data Intensive Applications](https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/)
- [Designing Machine Learning Systems](https://www.oreilly.com/library/view/designing-machine-learning/9781098107956/)
- [Pattern Recognition and Machine Learning](https://www.springer.com/gp/book/9780387310732)
- [Reinforcement Learning: An Introduction](http://incompleteideas.net/book/the-book-2nd.html)
- [Machine Learning: A Probabilistic Perspective](https://mitpress.mit.edu/9780262018029/machine-learning/)
- [You Look Like a Thing and I Love You](https://www.amazon.com/You-Look-Like-Thing-Love/dp/0316525243)
- [The Hundred-Page Machine Learning Book](https://themlbook.com/)
- [Machine Learning Yearning](http://www.mlyearning.org/)
- [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning)
- [Deep Learning](https://www.deeplearningbook.org/) | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "books.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/books.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 1544
} |
## Awesome AI/ML Interview Prep
- [Introduction to Machine Learning Interviews](https://huyenchip.com/ml-interviews-book/)
- [Acing AI Interviews](https://medium.com/acing-ai/acing-ai-interviews/home) | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "interviews.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/interviews.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 200
} |
## Awesome AI/ML Newsletters
- [AI Weekly](https://aiweekly.co/)
- [Rundown AI](https://www.therundown.ai/)
- [The AI/ML Engineer](https://www.aimlengineer.io/)
- [Artificial Intelligence Made Simple](https://artificialintelligencemadesimple.substack.com/?utm_source=recommendations_page&utm_campaign=1744179) | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "newsletters.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/newsletters.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 309
} |
## Awesome AI/ML Projects
### Beginner Projects
- [Titanic Survival Prediction](https://www.kaggle.com/c/titanic)
- [Iris Flower Classification](https://www.kaggle.com/datasets/uciml/iris)
- [House Price Prediction](https://www.kaggle.com/c/house-prices-advanced-regression-techniques)
- [Movie Recommendation System](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset)
### Intermediate Projects
- [Sentiment Analysis on Twitter Data](https://www.kaggle.com/c/nlp-getting-started)
- [Image Classification with CNNs](https://www.kaggle.com/c/digit-recognizer)
- [Loan Default Prediction](https://www.kaggle.com/c/home-credit-default-risk)
- [Time Series Forecasting with ARIMA](https://www.kaggle.com/c/web-traffic-time-series-forecasting)
### Advanced Projects
- [Generative Adversarial Networks (GANs) for Image Generation](https://www.kaggle.com/c/generative-dog-images)
- [End-to-End MLOps Pipeline](https://mlflow.org/docs/latest/index.html)
- [Face Recognition System](https://www.kaggle.com/datasets/ashishjangra27/face-mask-12k-images-dataset)
- [Build Your Own GPT Model](https://www.youtube.com/watch?v=kCc8FmEb1nY&t=1331s) | {
"source": "armankhondker/awesome-ai-ml-resources",
"title": "projects.md",
"url": "https://github.com/armankhondker/awesome-ai-ml-resources/blob/main/projects.md",
"date": "2025-02-09T00:12:17",
"stars": 2154,
"description": "Learn AI/ML for beginners with a roadmap and free resources. ",
"file_size": 1147
} |
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations. | {
"source": "clusterzx/paperless-ai",
"title": "CODE_OF_CONDUCT.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/CODE_OF_CONDUCT.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatible Services to automatically analyze and tag your documents.",
"file_size": 5201
} |
## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request | {
"source": "clusterzx/paperless-ai",
"title": "CONTRIBUTING.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/CONTRIBUTING.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatible Services to automatically analyze and tag your documents.",
"file_size": 267
} |
# Privacy Policy for Paperless-AI Chat Extension
Last updated: 16.01.2025
## 1. General Information
The Paperless-AI Chat Extension ("the Extension") is a browser extension designed to enhance document interaction in Paperless-ngx through AI-powered chat functionality. We are committed to protecting your privacy and personal data.
## 2. Data Controller
Email: clusterz[at]protonmail.com
## 3. Data Collection and Processing
### 3.1 Stored Data
The Extension stores the following data locally in your browser:
- URL of your Paperless-ngx installation
- URL of your Paperless-AI server
- API key for the Paperless-AI service
This data is stored exclusively in the Chrome Storage Sync API and is only accessible by the Extension.
### 3.2 Document Content Processing
- The Extension only accesses document content when you actively use the chat function for a specific document
- Document contents are transmitted exclusively to your configured Paperless-AI server
- No document content is transmitted to third parties
### 3.3 Chat History
- Chat histories are only temporarily held in browser memory
- This data is deleted when closing the chat window
- No permanent storage of chat histories occurs in the Extension
## 4. Data Transmission
The Extension transmits data exclusively to:
- Your self-hosted Paperless-ngx installation
- Your self-configured Paperless-AI server
No data is transmitted to the Extension developers or other third parties.
## 5. Permissions
The Extension requires the following browser permissions:
- "storage": For saving your configuration settings
- "activeTab": For integrating chat functionality into the Paperless-ngx interface
- "host_permissions": For communication with your Paperless-ngx and Paperless-AI servers
## 6. Data Security
- All communication with your servers is encrypted via HTTPS
- The API key is securely stored in the Chrome Storage system
- The Extension implements best practices for handling sensitive data
## 7. Your Rights
You have the right to:
- Uninstall the Extension at any time
- Delete your stored settings
- Cease using the Extension at any time
Under GDPR, you also have the following rights:
- Right to access your personal data
- Right to rectification
- Right to erasure ("right to be forgotten")
- Right to restrict processing
- Right to data portability
- Right to object
## 8. Changes to Privacy Policy
We reserve the right to modify this privacy policy when necessary, in compliance with applicable data protection regulations. The current version can always be found at [Link to Privacy Policy].
## 9. Contact
If you have any questions about data protection, you can contact us at any time:
clusterz[at]protonmail.com
## 10. Consent
By installing and using the Extension, you agree to this privacy policy. You can withdraw your consent at any time by uninstalling the Extension.
## 11. Technical Details
### 11.1 Data Storage Location
All configuration data is stored locally in your browser using Chrome's secure storage APIs. No data is stored on our servers.
### 11.2 Data Processing
- Document content is processed only when explicitly requested through the chat interface
- Processing occurs on your configured Paperless-AI server
- No content caching or storage occurs within the Extension
### 11.3 Security Measures
- All API communications use HTTPS encryption
- API keys are stored using Chrome's secure storage system
- No logging or tracking of user activities
- No analytics or tracking code is included in the Extension
## 12. Children's Privacy
The Extension is not intended for use by children under the age of 13. We do not knowingly collect or process data from children under 13 years of age.
## 13. International Data Transfers
As the Extension operates entirely within your browser and communicates only with servers you configure, no international data transfers occur through our services. | {
"source": "clusterzx/paperless-ai",
"title": "PRIVACY_POLICY.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/PRIVACY_POLICY.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatible Services to automatically analyze and tag your documents.",
"file_size": 3919
} |
   
# Paperless-AI
An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatible Services to automatically analyze and tag your documents. \
It features Automode, Manual Mode, Ollama and OpenAI support, a chat function to query your documents with AI, and a modern, intuitive web interface. \
\
**Following Services and OpenAI API compatible services have been successfully tested:**
- Ollama
- OpenAI
- DeepSeek.ai
- OpenRouter.ai
- Perplexity.ai
- Together.ai
- VLLM
- LiteLLM
- Fastchat
- Gemini (Google)
- ... and there are possibly many more

## Features
### Automated Document Management
- **Automatic Scanning**: Identifies and processes new documents within Paperless-ngx.
- **AI-Powered Analysis**: Leverages OpenAI API and Ollama (Mistral, Llama, Phi 3, Gemma 2) for precise document analysis.
- **Metadata Assignment**: Automatically assigns titles, tags, document_type and correspondent details.
### Advanced Customization Options
- **Predefined Processing Rules**: Specify which documents to process based on existing tags. *(Optional)* 🆕
- **Selective Tag Assignment**: Use only selected tags for processing. *(Disables the prompt dialog)* 🆕
- **Custom Tagging**: Assign a specific tag (of your choice) to AI-processed documents for easy identification. 🆕
### Manual Mode
- **AI-Assisted Analysis**: Manually analyze documents with AI support in a modern web interface. *(Accessible via the `/manual` endpoint)* 🆕
### Interactive Chat Functionality
- **Document Querying**: Ask questions about your documents and receive accurate, AI-generated answers. 🆕
## Installation
Visit the Wiki for installation:\
[Click here for Installation](https://github.com/clusterzx/paperless-ai/wiki/2.-Installation)
-------------------------------------------
## Docker Support
The application comes with full Docker support (an illustrative `docker run` sketch follows the list below):
- Automatic container restart on failure
- Health monitoring
- Volume persistence for database
- Resource management
- Graceful shutdown handling
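For orientation only, here is a minimal, hypothetical `docker run` sketch; the image name, port and volume path are assumptions for illustration, so follow the Wiki linked above for the authoritative setup.
```bash
# Hypothetical example - image name, port and volume path are assumptions;
# see the Wiki for the official installation instructions.
# --restart provides automatic container restart on failure; the named volume
# provides persistence for the database.
docker run -d \
  --name paperless-ai \
  --restart unless-stopped \
  -p 3000:3000 \
  -v paperless-ai_data:/app/data \
  clusterzx/paperless-ai:latest
docker ps --filter name=paperless-ai   # quick status/health check
```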
## Development
To run the application locally without Docker:
1. Install dependencies:
```bash
npm install
```
2. Start the development server:
```bash
npm run test
```
## Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- [Paperless-ngx](https://github.com/paperless-ngx/paperless-ngx) for the amazing document management system
- OpenAI API
- The Express.js and Node.js communities for their excellent tools
## Support
If you encounter any issues or have questions:
1. Check the [Issues](https://github.com/clusterzx/paperless-ai/issues) section
2. Create a new issue if yours isn't already listed
3. Provide detailed information about your setup and the problem
## Roadmap (DONE)
- [x] Support for custom AI models
- [x] Support for multiple language analysis
- [x] Advanced tag matching algorithms
- [x] Custom rules for document processing
- [x] Enhanced web interface with statistics | {
"source": "clusterzx/paperless-ai",
"title": "README.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/README.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatible Services to automatically analyze and tag your documents.",
"file_size": 3693
} |
# Security Policy
## Supported Versions
| Version | Supported |
| ------- | ------------------ |
| 2.5.2 | :white_check_mark: |
| 2.5.0 | :white_check_mark: |
| 1.9.x | :x: |
| < 1.9 | :x: |
## Reporting a Vulnerability
If you find a security vulnerability, please open an issue. | {
"source": "clusterzx/paperless-ai",
"title": "SECURITY.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/SECURITY.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatible Services to automatically analyze and tag your documents.",
"file_size": 314
} |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here. | {
"source": "clusterzx/paperless-ai",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/clusterzx/paperless-ai/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-12-01T16:41:07",
"stars": 2149,
"description": "An automated document analyzer for Paperless-ngx using OpenAI API, Ollama and all OpenAI API compatible Services to automatically analyze and tag your documents.",
"file_size": 594
} |
# SQL tips and tricks
[](https://stand-with-ukraine.pp.ua)
A (somewhat opinionated) list of SQL tips and tricks that I've picked up over the years.
There's so much you can do with SQL, but I've focused on what I find most useful in my day-to-day work as a data analyst and what I wish I had known when I first started writing SQL.
Please note that some of these tips might not be relevant for all RDBMSs.
## Table of contents
### Formatting/readability
1) [Use a leading comma to separate fields](#use-a-leading-comma-to-separate-fields)
2) [Use a dummy value in the WHERE clause](#use-a-dummy-value-in-the-where-clause)
3) [Indent your code](#indent-your-code)
4) [Consider CTEs when writing complex queries](#consider-ctes-when-writing-complex-queries)
### Useful features
5) [Anti-joins will return rows from one table that have no match in another table](#anti-joins-will-return-rows-from-one-table-that-have-no-match-in-another-table)
6) [`NOT EXISTS` is faster than `NOT IN` if your column allows `NULL`](#not-exists-is-faster-than-not-in-if-your-column-allows-null)
7) [Use `QUALIFY` to filter window functions](#use-qualify-to-filter-window-functions)
8) [You can (but shouldn't always) `GROUP BY` column position](#you-can-but-shouldnt-always-group-by-column-position)
9) [You can create a grand total with `GROUP BY ROLLUP`](#you-can-create-a-grand-total-with-group-by-rollup)
10) [Use `EXCEPT` to find the difference between two tables](#use-except-to-find-the-difference-between-two-tables)
### Avoid pitfalls
11) [Be aware of how `NOT IN` behaves with `NULL` values](#be-aware-of-how-not-in-behaves-with-null-values)
12) [Avoid ambiguity when naming calculated fields](#avoid-ambiguity-when-naming-calculated-fields)
13) [Always specify which column belongs to which table](#always-specify-which-column-belongs-to-which-table)
14) [Understand the order of execution](#understand-the-order-of-execution)
15) [Comment your code!](#comment-your-code)
16) [Read the documentation (in full)](#read-the-documentation-in-full)
17) [Use descriptive names for your saved queries](#use-descriptive-names-for-your-saved-queries)
## Formatting/readability
### Use a leading comma to separate fields
Use a leading comma to separate fields in the `SELECT` clause rather than a trailing comma.
- Clearly marks the start of a new column, as opposed to code that wraps onto multiple lines.
- Gives a consistent visual cue for spotting a missing comma; with trailing commas, varying line lengths make this harder.
```SQL
SELECT
employee_id
, employee_name
, job
, salary
FROM employees
;
```
- Also use a leading `AND` in the `WHERE` clause, for the same reasons (following tip demonstrates this).
### Use a dummy value in the WHERE clause
Use a dummy value in the `WHERE` clause so you can easily comment out conditions when testing or tweaking a query.
```SQL
/*
If I want to comment out the job
condition the following query
will break:
*/
SELECT *
FROM employees
WHERE
--job IN ('Clerk', 'Manager')
AND dept_no != 5
;
/*
With a dummy value there's no issue.
I can comment out all the conditions
and 1=1 will ensure the query still runs:
*/
SELECT *
FROM employees
WHERE 1=1
-- AND job IN ('Clerk', 'Manager')
AND dept_no != 5
;
```
### Indent your code
Indent your code to make it more readable to colleagues and your future self.
Opinions will vary on what this looks like, so be sure to follow your company/team's guidelines or, if none exist, go with whatever works for you.
You can also use an online formatter like [poorsql](https://poorsql.com/) or a linter like [sqlfluff](https://github.com/sqlfluff/sqlfluff).
```SQL
-- Bad:
SELECT
vc.video_id
, CASE WHEN meta.GENRE IN ('Drama', 'Comedy') THEN 'Entertainment' ELSE meta.GENRE END as content_type
FROM video_content AS vc
INNER JOIN metadata AS meta ON vc.video_id = meta.video_id
;
-- Good:
SELECT
    vc.video_id
    , CASE
        WHEN meta.GENRE IN ('Drama', 'Comedy') THEN 'Entertainment'
        ELSE meta.GENRE
      END AS content_type
FROM video_content AS vc
INNER JOIN metadata AS meta
    ON vc.video_id = meta.video_id
;
```
### Consider CTEs when writing complex queries
For longer than I'd care to admit I would nest inline views, which would lead to
queries that were hard to understand, particularly if revisited after a few weeks.
If you find yourself nesting inline views more than 2 or 3 levels deep,
consider using common table expressions, which can help you keep your code more organised and readable.
```SQL
-- Using inline views:
SELECT
vhs.movie
, vhs.vhs_revenue
, cs.cinema_revenue
FROM
(
SELECT
movie_id
, SUM(ticket_sales) AS cinema_revenue
FROM tickets
GROUP BY movie_id
) AS cs
INNER JOIN
(
SELECT
movie
, movie_id
, SUM(revenue) AS vhs_revenue
FROM blockbuster
GROUP BY movie, movie_id
) AS vhs
ON cs.movie_id = vhs.movie_id
;
-- Using CTEs:
WITH cinema_sales AS
(
SELECT
movie_id
, SUM(ticket_sales) AS cinema_revenue
FROM tickets
GROUP BY movie_id
),
vhs_sales AS
(
SELECT
movie
, movie_id
, SUM(revenue) AS vhs_revenue
FROM blockbuster
GROUP BY movie, movie_id
)
SELECT
vhs.movie
, vhs.vhs_revenue
, cs.cinema_revenue
FROM cinema_sales AS cs
INNER JOIN vhs_sales AS vhs
ON cs.movie_id = vhs.movie_id
;
```
## Useful features
### Anti-joins will return rows from one table that have no match in another table
Use anti-joins when you want to return rows from one table that don't have a match in another table.
For example, you only want video IDs of content that hasn't been archived.
There are multiple ways to do an anti-join:
```SQL
-- Using a LEFT JOIN:
SELECT
vc.video_id
FROM video_content AS vc
LEFT JOIN archive
ON vc.video_id = archive.video_id
WHERE 1=1
AND archive.video_id IS NULL -- Any rows with no match will have a NULL value.
;
-- Using NOT IN/subquery:
SELECT
video_id
FROM video_content
WHERE 1=1
AND video_id NOT IN (SELECT video_id FROM archive) -- Be mindful of NULL values.
-- Using NOT EXISTS/correlated subquery:
SELECT
video_id
FROM video_content AS vc
WHERE 1=1
AND NOT EXISTS (
SELECT 1
FROM archive AS a
WHERE a.video_id = vc.video_id
)
```
Note that I advise against using `NOT IN` - see the following tip.
### `NOT EXISTS` is faster than `NOT IN` if your column allows `NULL`
If you're using an anti-join with `NOT IN`, you'll likely find it's slower than `NOT EXISTS` if the values/column you're comparing against allows `NULL`.
I've experienced this when using Snowflake, and the PostgreSQL Wiki explicitly [calls this out](https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_NOT_IN):
*"...NOT IN (SELECT ...) does not optimize very well."*
Aside from being slow, using `NOT IN` will not work as intended if there is a `NULL` in the values being compared against - see [tip 11](#be-aware-of-how-not-in-behaves-with-null-values).
### Use `QUALIFY` to filter window functions
`QUALIFY` lets you filter the results of a query based on a window function, meaning you don't need
to use an inline view to filter your result set, which reduces the number of lines of code needed.
For example, if I want to return the top 10 markets per product I can use
`QUALIFY` rather than an inline view:
```SQL
-- Using QUALIFY:
SELECT
product
, market
, SUM(revenue) AS market_revenue
FROM sales
GROUP BY product, market
QUALIFY DENSE_RANK() OVER (PARTITION BY product ORDER BY SUM(revenue) DESC) <= 10
ORDER BY product, market_revenue
;
-- Without QUALIFY:
SELECT
product
, market
, market_revenue
FROM
(
SELECT
product
, market
, SUM(revenue) AS market_revenue
, DENSE_RANK() OVER (PARTITION BY product ORDER BY SUM(revenue) DESC) AS market_rank
FROM sales
GROUP BY product, market
)
WHERE market_rank <= 10
ORDER BY product, market_revenue
;
```
Unfortunately it looks like `QUALIFY` is only available in the big data warehouses (Snowflake, Amazon Redshift, Google BigQuery) but I had to include this because it's so useful.
### You can (but shouldn't always) `GROUP BY` column position
Instead of using the column name, you can `GROUP BY` or `ORDER BY` using the
column position.
- This can be useful for ad-hoc/one-off queries, but for production code
you should always refer to a column by its name.
```SQL
SELECT
dept_no
, SUM(salary) AS dept_salary
FROM employees
GROUP BY 1 -- dept_no is the first column in the SELECT clause.
ORDER BY 2 DESC
;
```
### You can create a grand total with `GROUP BY ROLLUP`
Creating a grand total (or sub-totals) is possible thanks to `GROUP BY ROLLUP`.
For example, if you've aggregated a company's employees salary per department you
can use `GROUP BY ROLLUP` to create a grand total that sums up the aggregated
`dept_salary` column.
```SQL
SELECT
COALESCE(dept_no, 'Total') AS dept_no
, SUM(salary) AS dept_salary
FROM employees
GROUP BY ROLLUP(dept_no)
ORDER BY dept_salary -- Be sure to order by this column to ensure the Total appears last/at the bottom of the result set.
;
```
### Use `EXCEPT` to find the difference between two tables
`EXCEPT` returns rows from the first query's result set that don't appear in the second query's result set.
```SQL
/*
Miles Davis will be returned from
this query
*/
SELECT artist_name
FROM artist
WHERE artist_name = 'Miles Davis'
EXCEPT
SELECT artist_name
FROM artist
WHERE artist_name = 'Nirvana'
;
/*
Nothing will be returned from this
query as 'Miles Davis' appears in
both queries' result sets.
*/
SELECT artist_name
FROM artist
WHERE artist_name = 'Miles Davis'
EXCEPT
SELECT artist_name
FROM artist
WHERE artist_name = 'Miles Davis'
;
```
You can also utilise `EXCEPT` with `UNION ALL` to verify whether two tables have the same data.
If no rows are returned, the tables are identical - otherwise, the rows returned are the ones causing the difference:
```SQL
/*
The first query will return rows from
employees that aren't present in
department.
The second query will return rows from
department that aren't present in employees.
The UNION ALL ensures that the
final result set combines all of these
rows, so you know which rows are
causing the difference.
*/
(
SELECT
id
, employee_name
FROM employees
EXCEPT
SELECT
id
, employee_name
FROM department
)
UNION ALL
(
SELECT
id
, employee_name
FROM department
EXCEPT
SELECT
id
, employee_name
FROM employees
)
;
```
## Avoid pitfalls
### Be aware of how `NOT IN` behaves with `NULL` values
`NOT IN` doesn't work if `NULL` is present in the values being checked against. As `NULL` represents Unknown, the SQL engine can't verify that the value being checked is not present in the list.
- Instead use `NOT EXISTS`.
``` SQL
INSERT INTO departments (id)
VALUES (1), (2), (NULL);
-- Doesn't work due to NULL:
SELECT *
FROM employees
WHERE department_id NOT IN (SELECT DISTINCT id from departments)
;
-- Solution.
SELECT *
FROM employees e
WHERE NOT EXISTS (
SELECT 1
FROM departments d
WHERE d.id = e.department_id
)
;
```
### Avoid ambiguity when naming calculated fields
When creating a calculated field, you might be tempted to name it the same as an existing column, but this can lead to unexpected behaviour, such as a
window function operating on the wrong field.
```SQL
CREATE TABLE products (
product VARCHAR(50) NOT NULL,
revenue INT NOT NULL
)
;
INSERT INTO products (product, revenue)
VALUES
('Shark', 100),
('Robot', 150),
('Alien', 90);
/*
The window function will rank
the 'Robot' product as 1 when
it should be 3
*/
SELECT
product
, CASE product WHEN 'Robot' THEN 0 ELSE revenue END AS revenue
, RANK() OVER (ORDER BY revenue DESC)
FROM products
;
/*
You can instead do this:
*/
SELECT
product
, CASE product WHEN 'Robot' THEN 0 ELSE revenue END AS revenue
, RANK() OVER (ORDER BY CASE product WHEN 'Robot' THEN 0 ELSE revenue END DESC)
FROM products
;
```
### Always specify which column belongs to which table
When you have complex queries with multiple joins, it pays to be able to
trace back an issue with a value to its source.
Additionally, your RDBMS might raise an error if two tables share the same
column name and you don't specify which column you are using.
```SQL
SELECT
vc.video_id
, vc.series_name
, metadata.season
, metadata.episode_number
FROM video_content AS vc
INNER JOIN video_metadata AS metadata
ON vc.video_id = metadata.video_id
;
```
### Understand the order of execution
If I had to give one piece of advice to someone learning SQL, it'd be to understand the order of
execution (of clauses). It will completely change how you write queries. This [blog post](https://blog.jooq.org/a-beginners-guide-to-the-true-order-of-sql-operations/) is a fantastic resource for learning.
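For example (a quick sketch reusing the `employees` table from earlier): because `WHERE` is evaluated before `SELECT`, you can't reference a column alias defined in the `SELECT` clause inside `WHERE`, but `ORDER BY`, which is evaluated last, can use it.
```SQL
-- Doesn't work in most RDBMSs: WHERE is evaluated before SELECT,
-- so the annual_salary alias doesn't exist yet.
SELECT
    employee_id
    , salary * 12 AS annual_salary
FROM employees
WHERE annual_salary > 100000
;
-- Works: ORDER BY is evaluated after SELECT, so the alias is available.
SELECT
    employee_id
    , salary * 12 AS annual_salary
FROM employees
WHERE salary * 12 > 100000
ORDER BY annual_salary DESC
;
```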
### Comment your code!
While in the moment you know why you did something, if you revisit
the code weeks, months or years later you might not remember.
- In general you should strive to write comments that explain why you did something, not how.
- Your colleagues and future self will thank you!
```SQL
SELECT
video_content.*
FROM video_content
LEFT JOIN archive -- New CMS cannot process archive video formats.
ON video_content.video_id = archive.video_id
WHERE 1=1
AND archive.video_id IS NULL
;
```
### Read the documentation (in full)
Using Snowflake I once needed to return the latest date from a list of columns
and so I decided to use `GREATEST()`.
What I didn't realise was that if one of the
arguments is `NULL` then the function returns `NULL`.
If I'd read the documentation in full I'd have known! In many cases it can take just a minute or less to scan
the documentation and it will save you the headache of having to work
out why something isn't working the way you expected:
```SQL
/*
If I'd read the documentation
further I'd also have realised
that my solution to the NULL
problem with GREATEST()...
*/
SELECT COALESCE(GREATEST(signup_date, consumption_date), signup_date, consumption_date);
/*
... could have been solved with the
following function:
*/
SELECT GREATEST_IGNORE_NULLS(signup_date, consumption_date);
```
### Use descriptive names for your saved queries
There's almost nothing worse than not being able to find a query you need to re-run/refer back to.
Use a descriptive name when saving your queries so you can easily find what you're looking for.
I usually write the subject of the query, the month the query was run, and the name of the requester (if there is one).
For example: `Lapsed users analysis - 2023-09-01 - Olivia Roberts` | {
"source": "ben-nour/SQL-tips-and-tricks",
"title": "README.md",
"url": "https://github.com/ben-nour/SQL-tips-and-tricks/blob/main/README.md",
"date": "2024-09-19T05:29:04",
"stars": 2143,
"description": "SQL tips and tricks",
"file_size": 14798
} |
# Evo 2: Genome modeling and design across all domains of life

Evo 2 is a state-of-the-art DNA language model for long-context modeling and design. Evo 2 models DNA sequences at single-nucleotide resolution at up to 1 million base pair context length using the [StripedHyena 2](https://github.com/Zymrael/savanna/blob/main/paper.pdf) architecture. Evo 2 was pretrained using [Savanna](https://github.com/Zymrael/savanna). Evo 2 was trained autoregressively on [OpenGenome2](https://huggingface.co/datasets/arcinstitute/opengenome2), a dataset containing 8.8 trillion tokens from all domains of life.
We describe Evo 2 in the preprint:
["Genome modeling and design across all domains of life with Evo 2"](https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1).
## Contents
- [Setup](#setup)
- [Requirements](#requirements)
- [Installation](#installation)
- [Checkpoints](#checkpoints)
- [Usage](#usage)
- [Forward](#forward)
- [Embeddings](#embeddings)
- [Generation](#generation)
- [Notebooks](#notebooks)
- [Dataset](#dataset)
- [Training Code](#training-code)
- [Citation](#citation)
## Setup
### Requirements
Evo 2 is based on [StripedHyena 2](https://github.com/Zymrael/vortex) which requires python>=3.11. Evo 2 uses [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) FP8 for some layers which requires an H100 (or other GPU with compute capability ≥8.9). We are actively investigating ways to avoid this requirement.
You can run Evo 2 without any installation using the [Nvidia Hosted API](https://build.nvidia.com/arc/evo2-40b).
You can also self-host an instance with the same API as the Nvidia hosted API using Nvidia NIM. See the [Nvidia NIM](#nvidia-nim-for-evo-2) section for more
information.
### Installation
Please clone and install from GitHub. We recommend using a new conda environment with python>=3.11.
```bash
git clone --recurse-submodules [email protected]:ArcInstitute/evo2.git
cd evo2
pip install .
```
If this doesn't work for any reason, you can also install from [Vortex](https://github.com/Zymrael/vortex) and follow the instructions there. PyPI support is coming soon!
You can check that the installation was correct by running a test.
```bash
python ./test/test_evo2.py --model_name evo2_7b
```
## Checkpoints
We provide the following model checkpoints, hosted on [HuggingFace](https://huggingface.co/arcinstitute):
| Checkpoint Name | Description |
|----------------------------------------|-------------|
| `evo2_40b` | A model pretrained with 1 million context obtained through context extension of `evo2_40b_base`.|
| `evo2_7b` | A model pretrained with 1 million context obtained through context extension of `evo2_7b_base`.|
| `evo2_40b_base` | A model pretrained with 8192 context length.|
| `evo2_7b_base` | A model pretrained with 8192 context length.|
| `evo2_1b_base` | A smaller model pretrained with 8192 context length.|
To use Evo 2 40B, you will need multiple GPUs. Vortex automatically handles device placement, splitting the model across available CUDA devices.
## Usage
Below are simple examples of how to download Evo 2 and use it locally using Python.
### Forward
Evo 2 can be used to score the likelihoods across a DNA sequence.
```python
import torch
from evo2 import Evo2
evo2_model = Evo2('evo2_7b')
sequence = 'ACGT'
input_ids = torch.tensor(
evo2_model.tokenizer.tokenize(sequence),
dtype=torch.int,
).unsqueeze(0).to('cuda:0')
outputs, _ = evo2_model(input_ids)
logits = outputs[0]
print('Logits: ', logits)
print('Shape (batch, length, vocab): ', logits.shape)
```
### Embeddings
Evo 2 embeddings can be saved for use downstream.
```python
import torch
from evo2 import Evo2
evo2_model = Evo2('evo2_7b')
sequence = 'ACGT'
input_ids = torch.tensor(
evo2_model.tokenizer.tokenize(sequence),
dtype=torch.int,
).unsqueeze(0).to('cuda:0')
layer_name = 'blocks.28.mlp.l3'
outputs, embeddings = evo2_model(input_ids, return_embeddings=True, layer_names=[layer_name])
print('Embeddings shape: ', embeddings[layer_name].shape)
```
### Generation
Evo 2 can generate DNA sequences based on prompts.
```python
from evo2 import Evo2
evo2_model = Evo2('evo2_7b')
output = evo2_model.generate(prompt_seqs=["ACGT"], n_tokens=400, temperature=1.0, top_k=4)
print(output.sequences[0])
```
### Notebooks
We provide an example [notebook](https://github.com/ArcInstitute/evo2/blob/main/notebooks/brca1/brca1_zero_shot_vep.ipynb) of zero-shot *BRCA1* variant effect prediction (a minimal scoring sketch follows the list below). This example includes a walkthrough of:
- Performing zero-shot *BRCA1* variant effect predictions using Evo 2
- Reference vs alternative allele normalization
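As a rough sketch of the underlying idea (not the notebook itself, and ignoring the normalization details it covers), a zero-shot score can be obtained by comparing the model's log-likelihood of a sequence window carrying the reference allele against the same window carrying the alternative allele, using the forward API shown above. The window sequences and scoring choices below are illustrative assumptions.
```python
import torch
from evo2 import Evo2

evo2_model = Evo2('evo2_7b')

def sequence_log_likelihood(seq: str) -> float:
    """Sum of per-position log-probabilities of the observed nucleotides."""
    input_ids = torch.tensor(
        evo2_model.tokenizer.tokenize(seq),
        dtype=torch.int,
    ).unsqueeze(0).to('cuda:0')
    with torch.no_grad():
        outputs, _ = evo2_model(input_ids)
    logprobs = torch.log_softmax(outputs[0].float(), dim=-1)  # (1, length, vocab)
    # Score each nucleotide with the logits predicted from the preceding context.
    targets = input_ids[:, 1:].long()
    token_lls = logprobs[:, :-1].gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lls.sum().item()

ref_window = 'ACGTACGTACGTACGTACGT'  # hypothetical window containing the reference allele
alt_window = 'ACGTACGTACTTACGTACGT'  # same window with the alternative allele substituted
delta = sequence_log_likelihood(alt_window) - sequence_log_likelihood(ref_window)
print('Zero-shot variant effect score (alt - ref):', delta)
```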
### NVIDIA NIM for Evo 2
Evo 2 is available on [NVIDIA NIM](https://catalog.ngc.nvidia.com/containers?filters=&orderBy=scoreDESC&query=evo2&page=&pageSize=).
- [Documentation](https://docs.nvidia.com/nim/bionemo/evo2/latest/overview.html)
- [Quickstart](https://docs.nvidia.com/nim/bionemo/evo2/latest/quickstart-guide.html)
The quickstart guides users through running Evo 2 on the NVIDIA NIM using a Python or shell client after starting the NIM. An example Python client script is shown below. This is the same way you would interact with the [Nvidia hosted API](https://build.nvidia.com/arc/evo2-40b?snippet_tab=Python).
```python
#!/usr/bin/env python3
import requests
import os
import json
from pathlib import Path
key = os.getenv("NVCF_RUN_KEY") or input("Paste the Run Key: ")
r = requests.post(
url=os.getenv("URL", "https://health.api.nvidia.com/v1/biology/arc/evo2-40b/generate"),
headers={"Authorization": f"Bearer {key}"},
json={
"sequence": "ACTGACTGACTGACTG",
"num_tokens": 8,
"top_k": 1,
"enable_sampled_probs": True,
},
)
if "application/json" in r.headers.get("Content-Type", ""):
print(r, "Saving to output.json:\n", r.text[:200], "...")
Path("output.json").write_text(r.text)
elif "application/zip" in r.headers.get("Content-Type", ""):
print(r, "Saving large response to data.zip")
Path("data.zip").write_bytes(r.content)
else:
print(r, r.headers, r.content)
```
### Very long sequences
We are actively working on optimizing performance for long-sequence processing. Vortex can currently compute over very long sequences via teacher prompting. However, please note that the forward pass on long sequences may currently be slow.
## Dataset
The OpenGenome2 dataset used for pretraining Evo 2 is available on [HuggingFace](https://huggingface.co/datasets/arcinstitute/opengenome2). Data is available either as raw FASTAs or as JSONL files, which include preprocessing and data augmentation.
## Training Code
Evo 2 was trained using [Savanna](https://github.com/Zymrael/savanna), an open source framework for training alternative architectures.
## Citation
If you find these models useful for your research, please cite the relevant papers
```
@article {Brixi2025.02.18.638918,
author = {Brixi, Garyk and Durrant, Matthew G and Ku, Jerome and Poli, Michael and Brockman, Greg and Chang, Daniel and Gonzalez, Gabriel A and King, Samuel H and Li, David B and Merchant, Aditi T and Naghipourfar, Mohsen and Nguyen, Eric and Ricci-Tam, Chiara and Romero, David W and Sun, Gwanggyu and Taghibakshi, Ali and Vorontsov, Anton and Yang, Brandon and Deng, Myra and Gorton, Liv and Nguyen, Nam and Wang, Nicholas K and Adams, Etowah and Baccus, Stephen A and Dillmann, Steven and Ermon, Stefano and Guo, Daniel and Ilango, Rajesh and Janik, Ken and Lu, Amy X and Mehta, Reshma and Mofrad, Mohammad R.K. and Ng, Madelena Y and Pannu, Jaspreet and Re, Christopher and Schmok, Jonathan C and St. John, John and Sullivan, Jeremy and Zhu, Kevin and Zynda, Greg and Balsam, Daniel and Collison, Patrick and Costa, Anthony B. and Hernandez-Boussard, Tina and Ho, Eric and Liu, Ming-Yu and McGrath, Tom and Powell, Kimberly and Burke, Dave P. and Goodarzi, Hani and Hsu, Patrick D and Hie, Brian},
title = {Genome modeling and design across all domains of life with Evo 2},
elocation-id = {2025.02.18.638918},
year = {2025},
doi = {10.1101/2025.02.18.638918},
publisher = {Cold Spring Harbor Laboratory},
URL = {https://www.biorxiv.org/content/early/2025/02/21/2025.02.18.638918},
eprint = {https://www.biorxiv.org/content/early/2025/02/21/2025.02.18.638918.full.pdf},
journal = {bioRxiv}
}
``` | {
"source": "ArcInstitute/evo2",
"title": "README.md",
"url": "https://github.com/ArcInstitute/evo2/blob/main/README.md",
"date": "2025-02-13T23:19:47",
"stars": 2116,
"description": "Genome modeling and design across all domains of life",
"file_size": 8382
} |
# Development Guidelines
This document contains critical information about working with this codebase. Follow these guidelines precisely.
## Core Development Rules
1. Package Management
- ONLY use uv, NEVER pip
- Installation: `uv add package`
- Running tools: `uv run tool`
- Upgrading: `uv add --dev package --upgrade-package package`
- FORBIDDEN: `uv pip install`, `@latest` syntax
2. Code Quality
- Type hints required for all code
- Public APIs must have docstrings
- Functions must be focused and small
- Follow existing patterns exactly
- Line length: 88 chars maximum
3. Testing Requirements
- Framework: `uv run pytest`
- Async testing: use anyio, not asyncio
- Coverage: test edge cases and errors
- New features require tests
- Bug fixes require regression tests
- For commits fixing bugs or adding features based on user reports add:
```bash
git commit --trailer "Reported-by:<name>"
```
Where `<name>` is the name of the user.
- For commits related to a Github issue, add
```bash
git commit --trailer "Github-Issue:#<number>"
```
- NEVER ever mention a `co-authored-by` or similar aspects. In particular, never
mention the tool used to create the commit message or PR.
## Pull Requests
- Create a detailed message of what changed. Focus on the high level description of
the problem it tries to solve, and how it is solved. Don't go into the specifics of the
code unless it adds clarity.
- Always add `jerome3o-anthropic` and `jspahrsummers` as reviewer.
- NEVER ever mention a `co-authored-by` or similar aspects. In particular, never
mention the tool used to create the commit message or PR.
## Python Tools
## Code Formatting
1. Ruff
- Format: `uv run ruff format .`
- Check: `uv run ruff check .`
- Fix: `uv run ruff check . --fix`
- Critical issues:
- Line length (88 chars)
- Import sorting (I001)
- Unused imports
- Line wrapping (see the sketch after this list):
- Strings: use parentheses
- Function calls: multi-line with proper indent
- Imports: split into multiple lines
2. Type Checking
- Tool: `uv run pyright`
- Requirements:
- Explicit None checks for Optional
- Type narrowing for strings
- Version warnings can be ignored if checks pass
3. Pre-commit
- Config: `.pre-commit-config.yaml`
- Runs: on git commit
- Tools: Prettier (YAML/JSON), Ruff (Python)
- Ruff updates:
- Check PyPI versions
- Update config rev
- Commit config first
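A small illustrative sketch of the line-wrapping rules above (the module and names are made up, not part of this codebase):
```python
# Imports: split long import lists across multiple lines.
from mypackage.client import (  # made-up module, for illustration only
    create_session,
    close_session,
    DEFAULT_TIMEOUT,
)

# Strings: wrap long strings with parentheses instead of backslashes.
ERROR_MESSAGE = (
    "Failed to connect to the server: the request timed out "
    "after three retries. Check your network settings."
)

# Function calls: multi-line with one argument per line and a proper indent.
session = create_session(
    host="localhost",
    port=8080,
    timeout=DEFAULT_TIMEOUT,
)
```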
## Error Resolution
1. CI Failures
- Fix order:
1. Formatting
2. Type errors
3. Linting
- Type errors:
- Get full line context
- Check Optional types
- Add type narrowing
- Verify function signatures
2. Common Issues
- Line length:
- Break strings with parentheses
- Multi-line function calls
- Split imports
- Types:
- Add None checks
- Narrow string types
- Match existing patterns
3. Best Practices
- Check git status before commits
- Run formatters before type checks
- Keep changes minimal
- Follow existing patterns
- Document public APIs
- Test thoroughly | {
"source": "modelcontextprotocol/python-sdk",
"title": "CLAUDE.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/CLAUDE.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 3168
} |
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
[email protected].
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations. | {
"source": "modelcontextprotocol/python-sdk",
"title": "CODE_OF_CONDUCT.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/CODE_OF_CONDUCT.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 5222
} |
# Contributing
Thank you for your interest in contributing to the MCP Python SDK! This document provides guidelines and instructions for contributing.
## Development Setup
1. Make sure you have Python 3.10+ installed
2. Install [uv](https://docs.astral.sh/uv/getting-started/installation/)
3. Fork the repository
4. Clone your fork: `git clone https://github.com/YOUR-USERNAME/python-sdk.git`
5. Install dependencies:
```bash
uv sync --frozen --all-extras --dev
```
## Development Workflow
1. Choose the correct branch for your changes:
- For bug fixes to a released version: use the latest release branch (e.g. v1.1.x for 1.1.3)
- For new features: use the main branch (which will become the next minor/major version)
- If unsure, ask in an issue first
2. Create a new branch from your chosen base branch
3. Make your changes
4. Ensure tests pass:
```bash
uv run pytest
```
5. Run type checking:
```bash
uv run pyright
```
6. Run linting:
```bash
uv run ruff check .
uv run ruff format .
```
7. Submit a pull request to the same branch you branched from
## Code Style
- We use `ruff` for linting and formatting
- Follow PEP 8 style guidelines
- Add type hints to all functions
- Include docstrings for public APIs
## Pull Request Process
1. Update documentation as needed
2. Add tests for new functionality
3. Ensure CI passes
4. Maintainers will review your code
5. Address review feedback
## Code of Conduct
Please note that this project is released with a [Code of Conduct](CODE_OF_CONDUCT.md). By participating in this project you agree to abide by its terms.
## License
By contributing, you agree that your contributions will be licensed under the MIT License. | {
"source": "modelcontextprotocol/python-sdk",
"title": "CONTRIBUTING.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/CONTRIBUTING.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 1695
} |
# MCP Python SDK
<div align="center">
<strong>Python implementation of the Model Context Protocol (MCP)</strong>
[![PyPI][pypi-badge]][pypi-url]
[![MIT licensed][mit-badge]][mit-url]
[![Python Version][python-badge]][python-url]
[![Documentation][docs-badge]][docs-url]
[![Specification][spec-badge]][spec-url]
[![GitHub Discussions][discussions-badge]][discussions-url]
</div>
<!-- omit in toc -->
## Table of Contents
- [Overview](#overview)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [What is MCP?](#what-is-mcp)
- [Core Concepts](#core-concepts)
- [Server](#server)
- [Resources](#resources)
- [Tools](#tools)
- [Prompts](#prompts)
- [Images](#images)
- [Context](#context)
- [Running Your Server](#running-your-server)
- [Development Mode](#development-mode)
- [Claude Desktop Integration](#claude-desktop-integration)
- [Direct Execution](#direct-execution)
- [Examples](#examples)
- [Echo Server](#echo-server)
- [SQLite Explorer](#sqlite-explorer)
- [Advanced Usage](#advanced-usage)
- [Low-Level Server](#low-level-server)
- [Writing MCP Clients](#writing-mcp-clients)
- [MCP Primitives](#mcp-primitives)
- [Server Capabilities](#server-capabilities)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)
[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
[pypi-url]: https://pypi.org/project/mcp/
[mit-badge]: https://img.shields.io/pypi/l/mcp.svg
[mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
[python-url]: https://www.python.org/downloads/
[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
[docs-url]: https://modelcontextprotocol.io
[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
[spec-url]: https://spec.modelcontextprotocol.io
[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions
## Overview
The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio and SSE
- Handle all MCP protocol messages and lifecycle events
## Installation
We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects:
```bash
uv add "mcp[cli]"
```
Alternatively:
```bash
pip install mcp
```
## Quickstart
Let's create a simple MCP server that exposes a calculator tool and some data:
```python
# server.py
from mcp.server.fastmcp import FastMCP
# Create an MCP server
mcp = FastMCP("Demo")
# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
"""Get a personalized greeting"""
return f"Hello, {name}!"
```
You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
```bash
mcp install server.py
```
Alternatively, you can test it with the MCP Inspector:
```bash
mcp dev server.py
```
## What is MCP?
The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
- And more!
## Core Concepts
### Server
The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
```python
# Add lifespan support for startup/shutdown with strong typing
from contextlib import asynccontextmanager
from dataclasses import dataclass
from typing import AsyncIterator
from mcp.server.fastmcp import FastMCP, Context
# Create a named server
mcp = FastMCP("My App")
# Specify dependencies for deployment and development
mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
@dataclass
class AppContext:
db: Database # Replace with your actual DB type
@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
"""Manage application lifecycle with type-safe context"""
try:
# Initialize on startup
await db.connect()
yield AppContext(db=db)
finally:
# Cleanup on shutdown
await db.disconnect()
# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)
# Access type-safe lifespan context in tools
@mcp.tool()
def query_db(ctx: Context) -> str:
"""Tool that uses initialized resources"""
    db = ctx.request_context.lifespan_context.db  # AppContext is a dataclass, so use attribute access
return db.query()
```
### Resources
Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
```python
@mcp.resource("config://app")
def get_config() -> str:
"""Static configuration data"""
return "App configuration here"
@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: str) -> str:
"""Dynamic user data"""
return f"Profile data for user {user_id}"
```
### Tools
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
```python
import httpx
@mcp.tool()
def calculate_bmi(weight_kg: float, height_m: float) -> float:
"""Calculate BMI given weight in kg and height in meters"""
return weight_kg / (height_m ** 2)
@mcp.tool()
async def fetch_weather(city: str) -> str:
"""Fetch current weather for a city"""
async with httpx.AsyncClient() as client:
response = await client.get(f"https://api.weather.com/{city}")
return response.text
```
### Prompts
Prompts are reusable templates that help LLMs interact with your server effectively:
```python
@mcp.prompt()
def review_code(code: str) -> str:
return f"Please review this code:\n\n{code}"
@mcp.prompt()
def debug_error(error: str) -> list[Message]:
return [
UserMessage("I'm seeing this error:"),
UserMessage(error),
AssistantMessage("I'll help debug that. What have you tried so far?")
]
```
### Images
FastMCP provides an `Image` class that automatically handles image data:
```python
from mcp.server.fastmcp import FastMCP, Image
from PIL import Image as PILImage
@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
"""Create a thumbnail from an image"""
img = PILImage.open(image_path)
img.thumbnail((100, 100))
return Image(data=img.tobytes(), format="png")
```
### Context
The Context object gives your tools and resources access to MCP capabilities:
```python
from mcp.server.fastmcp import FastMCP, Context
@mcp.tool()
async def long_task(files: list[str], ctx: Context) -> str:
"""Process multiple files with progress tracking"""
for i, file in enumerate(files):
ctx.info(f"Processing {file}")
await ctx.report_progress(i, len(files))
data, mime_type = await ctx.read_resource(f"file://{file}")
return "Processing complete"
```
## Running Your Server
### Development Mode
The fastest way to test and debug your server is with the MCP Inspector:
```bash
mcp dev server.py
# Add dependencies
mcp dev server.py --with pandas --with numpy
# Mount local code
mcp dev server.py --with-editable .
```
### Claude Desktop Integration
Once your server is ready, install it in Claude Desktop:
```bash
mcp install server.py
# Custom name
mcp install server.py --name "My Analytics Server"
# Environment variables
mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
mcp install server.py -f .env
```
### Direct Execution
For advanced scenarios like custom deployments:
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("My App")
if __name__ == "__main__":
mcp.run()
```
Run it with:
```bash
python server.py
# or
mcp run server.py
```
## Examples
### Echo Server
A simple server demonstrating resources, tools, and prompts:
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("Echo")
@mcp.resource("echo://{message}")
def echo_resource(message: str) -> str:
"""Echo a message as a resource"""
return f"Resource echo: {message}"
@mcp.tool()
def echo_tool(message: str) -> str:
"""Echo a message as a tool"""
return f"Tool echo: {message}"
@mcp.prompt()
def echo_prompt(message: str) -> str:
"""Create an echo prompt"""
return f"Please process this message: {message}"
```
### SQLite Explorer
A more complex example showing database integration:
```python
from mcp.server.fastmcp import FastMCP
import sqlite3
mcp = FastMCP("SQLite Explorer")
@mcp.resource("schema://main")
def get_schema() -> str:
"""Provide the database schema as a resource"""
conn = sqlite3.connect("database.db")
schema = conn.execute(
"SELECT sql FROM sqlite_master WHERE type='table'"
).fetchall()
return "\n".join(sql[0] for sql in schema if sql[0])
@mcp.tool()
def query_data(sql: str) -> str:
"""Execute SQL queries safely"""
conn = sqlite3.connect("database.db")
try:
result = conn.execute(sql).fetchall()
return "\n".join(str(row) for row in result)
except Exception as e:
return f"Error: {str(e)}"
```
## Advanced Usage
### Low-Level Server
For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
```python
from contextlib import asynccontextmanager
from typing import AsyncIterator
from mcp.server.lowlevel import Server
@asynccontextmanager
async def server_lifespan(server: Server) -> AsyncIterator[dict]:
"""Manage server startup and shutdown lifecycle."""
try:
# Initialize resources on startup
await db.connect()
yield {"db": db}
finally:
# Clean up on shutdown
await db.disconnect()
# Pass lifespan to server
server = Server("example-server", lifespan=server_lifespan)
# Access lifespan context in handlers
@server.call_tool()
async def query_db(name: str, arguments: dict) -> list:
ctx = server.request_context
db = ctx.lifespan_context["db"]
return await db.query(arguments["query"])
```
The lifespan API provides:
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers
```python
from mcp.server.lowlevel import Server, NotificationOptions
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types
# Create a server instance
server = Server("example-server")
@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
return [
types.Prompt(
name="example-prompt",
description="An example prompt template",
arguments=[
types.PromptArgument(
name="arg1",
description="Example argument",
required=True
)
]
)
]
@server.get_prompt()
async def handle_get_prompt(
name: str,
arguments: dict[str, str] | None
) -> types.GetPromptResult:
if name != "example-prompt":
raise ValueError(f"Unknown prompt: {name}")
return types.GetPromptResult(
description="Example prompt",
messages=[
types.PromptMessage(
role="user",
content=types.TextContent(
type="text",
text="Example prompt text"
)
)
]
)
async def run():
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="example",
server_version="0.1.0",
capabilities=server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
)
)
)
if __name__ == "__main__":
import asyncio
asyncio.run(run())
```
### Writing MCP Clients
The SDK provides a high-level client interface for connecting to MCP servers:
```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
import mcp.types as types
# Create server parameters for stdio connection
server_params = StdioServerParameters(
command="python", # Executable
args=["example_server.py"], # Optional command line arguments
env=None # Optional environment variables
)
# Optional: create a sampling callback
async def handle_sampling_message(message: types.CreateMessageRequestParams) -> types.CreateMessageResult:
return types.CreateMessageResult(
role="assistant",
content=types.TextContent(
type="text",
text="Hello, world! from model",
),
model="gpt-3.5-turbo",
stopReason="endTurn",
)
async def run():
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:
# Initialize the connection
await session.initialize()
# List available prompts
prompts = await session.list_prompts()
# Get a prompt
prompt = await session.get_prompt("example-prompt", arguments={"arg1": "value"})
# List available resources
resources = await session.list_resources()
# List available tools
tools = await session.list_tools()
# Read a resource
content, mime_type = await session.read_resource("file://some/path")
# Call a tool
result = await session.call_tool("tool-name", arguments={"arg1": "value"})
if __name__ == "__main__":
import asyncio
asyncio.run(run())
```
### MCP Primitives
The MCP protocol defines three core primitives that servers can implement:
| Primitive | Control | Description | Example Use |
|-----------|-----------------------|-----------------------------------------------------|------------------------------|
| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled| Contextual data managed by the client application | File contents, API responses |
| Tools | Model-controlled | Functions exposed to the LLM to take actions | API calls, data updates |
### Server Capabilities
MCP servers declare capabilities during initialization:
| Capability | Feature Flag | Description |
|-------------|------------------------------|------------------------------------|
| `prompts` | `listChanged` | Prompt template management |
| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates |
| `tools` | `listChanged` | Tool discovery and execution |
| `logging` | - | Server logging configuration |
| `completion`| - | Argument completion suggestions |
## Documentation
- [Model Context Protocol documentation](https://modelcontextprotocol.io)
- [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
- [Officially supported servers](https://github.com/modelcontextprotocol/servers)
## Contributing
We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.
## License
This project is licensed under the MIT License - see the LICENSE file for details. | {
"source": "modelcontextprotocol/python-sdk",
"title": "README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 16761
} |
# Release Process
## Bumping Dependencies
1. Change dependency
2. Upgrade lock with `uv lock --resolution lowest-direct`
## Major or Minor Release
1. Create a release branch named `vX.Y.Z` where `X.Y.Z` is the version.
2. Bump version number on release branch.
3. Create an annotated, signed tag: `git tag -s -a vX.Y.Z`
4. Create a GitHub release using `gh release create` and publish it.
5. Have the release flow reviewed.
6. Bump version number on `main` to the next version followed by `.dev`, e.g. `v0.4.0.dev`. | {
"source": "modelcontextprotocol/python-sdk",
"title": "RELEASE.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/RELEASE.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 524
} |
# Security Policy
Thank you for helping us keep the SDKs and systems they interact with secure.
## Reporting Security Issues
This SDK is maintained by [Anthropic](https://www.anthropic.com/) as part of the Model Context Protocol project.
The security of our systems and user data is Anthropic’s top priority. We appreciate the work of security researchers acting in good faith in identifying and reporting potential vulnerabilities.
Our security program is managed on HackerOne and we ask that any validated vulnerability in this functionality be reported through their [submission form](https://hackerone.com/anthropic-vdp/reports/new?type=team&report_type=vulnerability).
## Vulnerability Disclosure Program
Our Vulnerability Program Guidelines are defined on our [HackerOne program page](https://hackerone.com/anthropic-vdp). | {
"source": "modelcontextprotocol/python-sdk",
"title": "SECURITY.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/SECURITY.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 834
} |
# Python SDK Examples
This folder aims to provide simple examples of using the Python SDK. Please refer to the
[servers repository](https://github.com/modelcontextprotocol/servers)
for real-world servers. | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 206
} |
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here. | {
"source": "modelcontextprotocol/python-sdk",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 833
} |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here. | {
"source": "modelcontextprotocol/python-sdk",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 594
} |
# MCP Simple Prompt
A simple MCP server that exposes a customizable prompt template with optional context and topic parameters.
## Usage
Start the server using either stdio (default) or SSE transport:
```bash
# Using stdio transport (default)
uv run mcp-simple-prompt
# Using SSE transport on custom port
uv run mcp-simple-prompt --transport sse --port 8000
```
The server exposes a prompt named "simple" that accepts two optional arguments:
- `context`: Additional context to consider
- `topic`: Specific topic to focus on
## Example
Using the MCP client, you can retrieve the prompt like this using the STDIO transport:
```python
import asyncio
from mcp.client.session import ClientSession
from mcp.client.stdio import StdioServerParameters, stdio_client
async def main():
async with stdio_client(
StdioServerParameters(command="uv", args=["run", "mcp-simple-prompt"])
) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
# List available prompts
prompts = await session.list_prompts()
print(prompts)
# Get the prompt with arguments
prompt = await session.get_prompt(
"simple",
{
"context": "User is a software developer",
"topic": "Python async programming",
},
)
print(prompt)
asyncio.run(main())
``` | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/servers/simple-prompt/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/servers/simple-prompt/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 1472
} |
# MCP Simple Resource
A simple MCP server that exposes sample text files as resources.
## Usage
Start the server using either stdio (default) or SSE transport:
```bash
# Using stdio transport (default)
uv run mcp-simple-resource
# Using SSE transport on custom port
uv run mcp-simple-resource --transport sse --port 8000
```
The server exposes some basic text file resources that can be read by clients.
## Example
Using the MCP client, you can retrieve resources like this using the STDIO transport:
```python
import asyncio
from mcp.types import AnyUrl
from mcp.client.session import ClientSession
from mcp.client.stdio import StdioServerParameters, stdio_client
async def main():
async with stdio_client(
StdioServerParameters(command="uv", args=["run", "mcp-simple-resource"])
) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
# List available resources
resources = await session.list_resources()
print(resources)
# Get a specific resource
resource = await session.read_resource(AnyUrl("file:///greeting.txt"))
print(resource)
asyncio.run(main())
``` | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/servers/simple-resource/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/servers/simple-resource/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 1225
} |
# MCP Simple Tool
A simple MCP server that exposes a website fetching tool.
## Usage
Start the server using either stdio (default) or SSE transport:
```bash
# Using stdio transport (default)
uv run mcp-simple-tool
# Using SSE transport on custom port
uv run mcp-simple-tool --transport sse --port 8000
```
The server exposes a tool named "fetch" that accepts one required argument:
- `url`: The URL of the website to fetch
## Example
Using the MCP client, you can use the tool like this using the STDIO transport:
```python
import asyncio
from mcp.client.session import ClientSession
from mcp.client.stdio import StdioServerParameters, stdio_client
async def main():
async with stdio_client(
StdioServerParameters(command="uv", args=["run", "mcp-simple-tool"])
) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
# List available tools
tools = await session.list_tools()
print(tools)
# Call the fetch tool
result = await session.call_tool("fetch", {"url": "https://example.com"})
print(result)
asyncio.run(main())
``` | {
"source": "modelcontextprotocol/python-sdk",
"title": "examples/servers/simple-tool/README.md",
"url": "https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/servers/simple-tool/README.md",
"date": "2024-09-24T21:01:35",
"stars": 2103,
"description": "The official Python SDK for Model Context Protocol servers and clients",
"file_size": 1168
} |
<div align="center">
<img src="https://raw.githubusercontent.com/zml/zml.github.io/refs/heads/main/docs-assets/zml-banner.png" style="width:100%; height:120px;">
<a href="https://zml.ai">Website</a>
| <a href="#getting-started">Getting Started</a>
| <a href="https://docs.zml.ai">Documentation</a>
| <a href="https://discord.gg/6y72SN2E7H">Discord</a>
| <a href="./CONTRIBUTING.md">Contributing</a>
</div>
[ZML]: https://zml.ai/
[Getting Started]: #getting-started
[Documentation]: https://docs.zml.ai
[Contributing]: ./CONTRIBUTING.md
[Discord]: https://discord.gg/6y72SN2E7H
# Bonjour 👋
At ZML, we are creating exciting AI products on top of our high-performance
AI inference stack. Our stack is built for production, using the amazing
[Zig](https://ziglang.org) language, [MLIR](https://mlir.llvm.org), and the
power of [Bazel](https://bazel.build).
<div align="center">
<div>Take me straight to <a href="#getting-started">getting started</a> or <a href="#a-taste-of-zml">give me a taste</a> 🥐!</div>
</div>
---
# We're happy to share!
We're very happy to share our inference stack with the World and hope it allows
you, too, to build cool and exciting AI projects.
To give you a glimpse of what you can do with ZML, here is an early demo:
<div align="center"><img src="https://zml.ai/docs-assets/ZML.gif" style="width:75%"></div>
It shows a prototype running a LLaMA2 model sharded on 1 NVIDIA RTX 4090, 1 AMD
6800XT, and 1 Google Cloud TPU v2. All accelerators were hosted in different
locations, with activations being passed over a VPN.
All processes used the same model code, cross-compiled on a Mac, and copied onto
the servers.
For more inspiration, see also the examples below or check out the
[examples](./examples) folder.
# Getting started
## Prerequisites
We use `bazel` to build ZML and its dependencies. The only prerequisite is
`bazel`, which we recommend downloading through `bazelisk`, a version manager
for `bazel`.
**Please note: If you do not wish to install `bazel`** system-wide, we provide
[examples/bazel.sh](examples/bazel.sh) which downloads it to your home folder
and runs it.
**Install Bazel** (recommended):
<details><summary>
### macOS
</summary>
```
brew install bazelisk
```
</details>
<details><summary>
### Linux
</summary>
```
curl -L -o /usr/local/bin/bazel 'https://github.com/bazelbuild/bazelisk/releases/download/v1.25.0/bazelisk-linux-amd64'
chmod +x /usr/local/bin/bazel
```
</details>
## Run a pre-packaged model
We have implemented a variety of example models in ZML. See our reference
implementations in the
[examples](https://github.com/zml/zml/tree/master/examples/) folder.
### MNIST
The [classic](https://en.wikipedia.org/wiki/MNIST_database) handwritten digits
recognition task. The model is tasked to recognize a handwritten digit, which
has been converted to a 28x28 pixel monochrome image. `Bazel` will download a
pre-trained model, and the test dataset. The program will load the model,
compile it, and classify a randomly picked example from the test dataset.
On the command line:
```
cd examples
bazel run -c opt //mnist
# or
./bazel.sh run -c opt //mnist
```
### Meta Llama 3.1 8B
This model has restrictions, see
[here](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It **requires
approval from Meta on Huggingface**, which can take a few hours to get granted.
While waiting, you can already generate an access token to log into HuggingFace
from `bazel`; see [here](./docs/huggingface-access-token.md).
Once you've been granted access, you're ready to download a gated model like
`Meta-Llama-3.1-8B-Instruct`!
```
# requires token in $HOME/.cache/huggingface/token, as created by the
# `huggingface-cli login` command, or the `HUGGINGFACE_TOKEN` environment variable.
cd examples
bazel run -c opt //llama:Llama-3.1-8B-Instruct
bazel run -c opt //llama:Llama-3.1-8B-Instruct -- --prompt="What is the capital of France?"
```
You can also try `Llama-3.1-70B-Instruct` if you have enough memory.
### Meta Llama 3.2 1B
Like the 8B model above, this model also requires approval. See
[here](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for access requirements.
```
cd examples
bazel run -c opt //llama:Llama-3.2-1B-Instruct
bazel run -c opt //llama:Llama-3.2-1B-Instruct -- --prompt="What is the capital of France?"
```
For a larger 3.2 model, you can also try `Llama-3.2-3B-Instruct`.
## Running Models on GPU / TPU
You can compile models for accelerator runtimes by appending one or more of the
following arguments to the command line when compiling / running a model:
- NVIDIA CUDA: `--@zml//runtimes:cuda=true`
- AMD RoCM: `--@zml//runtimes:rocm=true`
- Google TPU: `--@zml//runtimes:tpu=true`
- AWS Trainium/Inferentia 2: `--@zml//runtimes:neuron=true`
- **AVOID CPU:** `--@zml//runtimes:cpu=false`
The latter, avoiding compilation for CPU, cuts down compilation time.
So, to run the Llama 3.2 1B model from above on a host with an NVIDIA GPU,
run the following:
```
cd examples
bazel run -c opt //llama:Llama-3.2-1B-Instruct \
--@zml//runtimes:cuda=true \
-- --prompt="What is the capital of France?"
```
## Run Tests
```
bazel test //zml:test
```
# A taste of ZML
## MNIST
```zig
const std = @import("std");
const zml = @import("zml");
/// Model definition
const Mnist = struct {
fc1: Layer,
fc2: Layer,
const Layer = struct {
weight: zml.Tensor,
bias: zml.Tensor,
pub fn forward(self: Layer, input: zml.Tensor) zml.Tensor {
return self.weight.matmul(input).add(self.bias).relu();
}
};
/// just two linear layers + relu activation
pub fn forward(self: Mnist, input: zml.Tensor) zml.Tensor {
std.log.info("Compiling for target: {s}", .{@tagName(input.getContext().target())});
var x = input.flattenAll().convert(.f32);
const layers: []const Layer = &.{ self.fc1, self.fc2 };
for (layers) |layer| {
x = zml.call(layer, .forward, .{x});
}
return x.argMax(0, .u8).indices;
}
};
```
## Tagged Tensors
```zig
const Sdpa = struct {
pub fn forward(_: Sdpa, ctx: *zml.Context, q_: zml.Tensor, k_: zml.Tensor, v_: zml.Tensor) zml.Tensor {
const q = q_.withTags(.{ .b, .h, .q, .hd });
const k = k_.withTags(.{ .b, .h, .k, .hd });
const v = v_.withTags(.{ .b, .h, .k, .hd });
const attn_mask = zml.nn.causalAttnMask(ctx, .{ .q = q.dim(.q), .k = k.dim(.k) }, q.dtype(), null);
return zml.nn.sdpa(ctx, q, k, v, .{ .attn_mask = attn_mask });
}
};
```
# Where to go next:
You might want to check out more [examples](./examples), read through the
[documentation directly on GitHub](./docs/README.md), or, for the full rendering
experience, browse the
[online documentation with included API reference](https://docs.zml.ai).
# Contributing
See [here][Contributing].
# License
ZML is licensed under the [Apache 2.0 license](./LICENSE).
# Thanks to our contributors
<a href="https://github.com/zml/zml/graphs/contributors">
<img src="https://contrib.rocks/image?repo=zml/zml" />
</a> | {
"source": "zml/zml",
"title": "README.md",
"url": "https://github.com/zml/zml/blob/master/README.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 7197
} |
### Greetings, Programs!
If you want to run our example models, please head over to
[Getting Started](./tutorials/getting_started.md).
Ready to write some code? Try starting with [your first model in ZML](./tutorials/write_first_model.md), or familiarize yourself with the high-level [ZML concepts](./learn/concepts.md) first.
### MENU:
## Tutorials
- [Getting Started](./tutorials/getting_started.md) : **install ZML, run LLama**
- [Writing your first model](./tutorials/write_first_model.md)
- [Simplifying Dimension Handling](./tutorials/working_with_tensors.md) - **Tagged Tensors!**
## How to ?
- [HuggingFace Authentication](./howtos/huggingface_access_token.md)
- [Port Pytorch models to ZML](./howtos/howto_torch2zml.md)
- [Add Weights Files to your projects](./howtos/add_weights.md)
- [Cross Compile and Deploy Models on a Server](./howtos/deploy_on_server.md)
- [Dockerize Models](./howtos/dockerize_models.md)
## Learn more...
- [ZML Concepts](./learn/concepts.md) : **Tensors, Models, Executables, etc. explained**
## Contribute
- [Style Guide](./misc/style_guide.md) | {
"source": "zml/zml",
"title": "docs/README.md",
"url": "https://github.com/zml/zml/blob/master/docs/README.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 1091
} |
# Running Gated Huggingface Models with Token Authentication
Some models have restrictions and may require some sort of approval or agreement
process, which, by consequence, **requires token-authentication with Huggingface**.
The easiest way might be to use the `huggingface-cli login` command.
Alternatively, here is how you can generate a **"read-only public repositories"**
access token to log into your account on Huggingface, directly from `bazel`, in order to download models.
* log in at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
* click on "Create new token"
* give the token a name, eg `zml_public_repos`,
* under _Repositories_, grant the following permission: "Read access to contents of all public gated repos you can access".
* at the bottom click on "Create token".
* copy the token by clicking `Copy`. **You won't be able to see it again.**
* the token looks something like `hf_abCdEfGhijKlM`.
* store the token on your machine (replace the placeholder with your actual token):
You can use the `HUGGINGFACE_TOKEN` environment variable to store the token or use
its standard location:
```
mkdir -p $HOME/.cache/huggingface/; echo <hf_my_token> > "$HOME/.cache/huggingface/token"
```
Now you're ready to download a gated model like `Meta-Llama-3-8b`!
**Example:**
```
# requires token in $HOME/.cache/huggingface/token, as created by the
# `huggingface-cli login` command, or the `HUGGINGFACE_TOKEN` environment variable.
cd examples
bazel run -c opt //llama:Meta-Llama-3-8b
bazel run -c opt //llama:Meta-Llama-3-8b -- --prompt="Once upon a time,"
``` | {
"source": "zml/zml",
"title": "docs/huggingface-access-token.md",
"url": "https://github.com/zml/zml/blob/master/docs/huggingface-access-token.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 1611
} |
# Adding Weights Files
Our [first model](../tutorials/write_first_model.md) did not need any weights files.
We just created weights and biases at runtime.
But real-world models typically need weights files, and maybe some other
supporting files.
For easy deployments, we recommend uploading those files. In many instances,
you will use a site like [🤗 Hugging Face](https://huggingface.co).
We also recommend adding a `weights.bzl` file to your project root directory, so
you don't "pollute" your build file with long URLs and SHAs:
```python
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_file")
def _weights_impl(mctx):
http_file(
name = "com_github_zml_cdn_mnist",
downloaded_file_path = "mnist.pt",
sha256 = "d8a25252e28915e147720c19223721f0f53e3317493727ca754a2dd672450ba9",
url = "https://github.com/ggerganov/ggml/raw/18703ad600cc68dbdb04d57434c876989a841d12/examples/mnist/models/mnist/mnist_model.state_dict",
)
http_file(
name = "com_github_zml_cdn_mnist_data",
downloaded_file_path = "mnist.ylc",
sha256 = "0fa7898d509279e482958e8ce81c8e77db3f2f8254e26661ceb7762c4d494ce7",
url = "https://github.com/ggerganov/ggml/raw/18703ad600cc68dbdb04d57434c876989a841d12/examples/mnist/models/mnist/t10k-images.idx3-ubyte",
)
return mctx.extension_metadata(
reproducible = True,
root_module_direct_deps = "all",
root_module_direct_dev_deps = [],
)
weights = module_extension(
implementation = _weights_impl,
)
```
The above `weights.bzl` shows how we load files for MNIST:
- `mnist.pt` (model weights)
- `mnist.ylc` (dataset for picking sample images)
Then, in your `BUILD.bazel`, you can refer to the files you defined above, in
the following way:
```python
zig_cc_binary(
name = "mnist",
args = [
"$(location @com_github_zml_cdn_mnist//file)",
"$(location @com_github_zml_cdn_mnist_data//file)",
],
data = [
"@com_github_zml_cdn_mnist//file",
"@com_github_zml_cdn_mnist_data//file",
],
main = "mnist.zig",
deps = [
"//async",
"//zml",
],
)
```
See how:
- we use `data = [ ... ]` to reference the files in `weights.bzl`
- we use `args = [ ... ]` to pass the files as command-line arguments to the
MNIST executable at runtime, automatically. | {
"source": "zml/zml",
"title": "docs/howtos/add_weights.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/add_weights.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 2368
} |
# Deploying Models on a Server
To run models on remote GPU/TPU machines, it is inconvenient to have to check
out your project’s repository and compile it on every target. Instead, you more
likely want to cross-compile right from your development machine, **for every**
supported target architecture and accelerator.
See [Getting Started with ZML](../tutorials/getting_started.md) if you need more
information on how to compile a model.
**Here's a quick recap:**
You can compile models for accelerator runtimes by appending one or more of the
following arguments to the command line when compiling / running a model:
- NVIDIA CUDA: `--@zml//runtimes:cuda=true`
- AMD RoCM: `--@zml//runtimes:rocm=true`
- Google TPU: `--@zml//runtimes:tpu=true`
- AWS Trainium/Inferentia 2: `--@zml//runtimes:neuron=true`
- **AVOID CPU:** `--@zml//runtimes:cpu=false`
So, to run the OpenLLama model from above **on your development machine**
housing an NVIDIA GPU, run the following:
```
cd examples
bazel run -c opt //llama:OpenLLaMA-3B --@zml//runtimes:cuda=true
```
## Cross-Compiling and creating a TAR for your server
Currently, ZML lets you cross-compile to one of the following target
architectures:
- Linux X86_64: `--platforms=@zml//platforms:linux_amd64`
- Linux ARM64: `--platforms=@zml//platforms:linux_arm64`
- MacOS ARM64: `--platforms=@zml//platforms:macos_arm64`
As an example, here is how you build above OpenLLama for CUDA on Linux X86_64:
```
cd examples
bazel build -c opt //llama:OpenLLaMA-3B \
--@zml//runtimes:cuda=true \
--@zml//runtimes:cpu=false \
--platforms=@zml//platforms:linux_amd64
```
### Creating the TAR
When cross-compiling, it is convenient to produce a compressed TAR file that
you can copy to the target host, so you can unpack it there and run the model.
Let's use MNIST as example.
If not present already, add an "archive" target to the model's `BUILD.bazel`,
like this:
```python
load("@aspect_bazel_lib//lib:tar.bzl", "mtree_spec", "tar")
# Manifest, required for building the tar archive
mtree_spec(
name = "mtree",
srcs = [":mnist"],
)
# Create a tar archive from the above manifest
tar(
name = "archive",
srcs = [":mnist"],
args = [
"--options",
"zstd:compression-level=9",
],
compress = "zstd",
mtree = ":mtree",
)
```
... and then build the TAR archive:
```
# cd examples
bazel build -c opt //mnist:archive \
--@zml//runtimes:cuda=true \
--@zml//runtimes:cpu=false \
--platforms=@zml//platforms:linux_amd64
```
Note the `//mnist:archive` notation.
The resulting tar file will be in `bazel-bin/mnist/archive.tar.zst`.
### Run it on the server
You can copy the TAR archive onto your Linux X86_64 NVIDIA GPU server, untar
and run it:
```bash
# on your machine
scp bazel-bin/mnist/archive.tar.zst destination-server:
ssh destination-server # to enter the server
# ... on the server
tar xvf archive.tar.zst
./mnist \
'mnist.runfiles/_main~_repo_rules~com_github_ggerganov_ggml_mnist/file/mnist.pt' \
'mnist.runfiles/_main~_repo_rules~com_github_ggerganov_ggml_mnist_data/file/mnist.ylc'
```
The easiest way to figure out the commandline arguments of an example model is
to consult the model's `BUILD.bazel` and check out its `args` section. It will
reference e.g. weights files that are defined either in the same `BUILD.bazel`
file or in a `weights.bzl` file.
You can also consult the console output when running your model locally:
```bash
bazel run //mnist
INFO: Analyzed target //mnist:mnist (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //mnist:mnist up-to-date:
bazel-bin/mnist/mnist
INFO: Elapsed time: 0.302s, Critical Path: 0.00s
INFO: 3 processes: 3 internal.
INFO: Build completed successfully, 3 total actions
INFO: Running command line: bazel-bin/mnist/mnist ../_main~_repo_rules~com_github_ggerganov_ggml_mnist/file/mnist.pt ../_main~_repo_rules~com_github_ggerganov_ggml_mnist_data/file/mnist.ylc
# ...
```
You see the command line right up there. On the server, you just need to replace
`../` with the 'runfiles' directory of your TAR. | {
"source": "zml/zml",
"title": "docs/howtos/deploy_on_server.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/deploy_on_server.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 4261
} |
# Containerize a Model
A convenient way of [deploying a model](../howtos/deploy_on_server.md) is packaging
it up in a Docker container. Thanks to bazel, this is really easy to do. You
just have to append a few lines to your model's `BUILD.bazel`. Here is how it's
done.
**Note:** This walkthrough will work with your installed container runtime, no
matter if it's **Docker or e.g. Podman.** Also, we'll create images in the
[OCI](https://github.com/opencontainers/image-spec) open image format.
Let's try containerizing our [first model](../tutorials/write_first_model.md), as it
doesn't need any additional weights files. We'll see [down below](#adding-weights-and-data)
how to add those. We'll also see how to add GPU/TPU support for our container
there.
Bazel creates images from `.TAR` archives.
The steps required for containerization are:
1. Let bazel create a MANIFEST for the tar file to come.
2. Let bazel create a TAR archive of everything needed for the model to run.
- see also: [Deploying Models on a Server](../howtos/deploy_on_server.md), where
we prepare a TAR file, and copy it to and run it on a remote GPU server.
3. Let bazel create a container image for Linux X86_64.
4. Let bazel load the image _(OPTIONAL)_.
5. Let bazel push the image straight to the Docker registry.
6. Let bazel [add weights and data](#adding-weights-and-data), GPU/TPU support
_(OPTIONAL)_.
**Note:** every TAR archive we create (one in this example) becomes its own
layer in the container image.
## Dockerizing our first model
We need to add a few "imports" at the beginning of our `BUILD.bazel` so we can
use their rules to define our 5 additional targets:
```python
load("@aspect_bazel_lib//lib:tar.bzl", "mtree_spec", "tar")
load("@aspect_bazel_lib//lib:transitions.bzl", "platform_transition_filegroup")
load("@rules_oci//oci:defs.bzl", "oci_image", "oci_load", "oci_push")
load("@zml//bazel:zig.bzl", "zig_cc_binary")
zig_cc_binary(
name = "simple_layer",
main = "main.zig",
deps = [
"@zml//async",
"@zml//zml",
],
)
```
### 1. The Manifest
To get started, let's make bazel generate a manifest that will be used when
creating the TAR archive.
```python
# Manifest created from the simple_layer binary and friends
mtree_spec(
name = "mtree",
srcs = [":simple_layer"],
)
```
It is as easy as that: we define that we want everything needed for our binary
to be included in the manifest.
### 2. The TAR
Creating the TAR archive is equally easy; it's just a few more lines of bazel:
```python
# Create a tar archive from the above manifest
tar(
name = "archive",
srcs = [":simple_layer"],
args = [
"--options",
"zstd:compression-level=9",
],
compress = "zstd",
mtree = ":mtree",
)
```
Note that we specify high **zstd** compression, which serves two purposes:
avoiding large TAR files, and creating TAR files that are quick to
extract.
### 3. The Image
Creating the actual image is a two-step process:
- First, we use a rule that creates an
[OCI](https://github.com/opencontainers/image-spec) image (open image
format). But we're not done yet.
- Second, we force the actual OCI image to be built for `Linux X86_64` always,
regardless of the host we're building the image **on**.
```python
# The actual docker image, with entrypoint, created from tar archive
oci_image(
name = "image_",
base = "@distroless_cc_debian12",
entrypoint = ["./{}/simple_layer".format(package_name())],
tars = [":archive"],
)
```
See how we use string interpolation to fill in the folder name for the
container's entrypoint?
Next, we use a transition rule to force the container to be built for
Linux X86_64:
```python
# We always want to create the image for Linux
platform_transition_filegroup(
name = "image",
srcs = [":image_"],
target_platform = "@zml//platforms:linux_amd64",
)
```
And that's almost it! You can already build the image:
```
# cd examples
bazel build -c opt //simple_layer:image
INFO: Analyzed target //simple_layer:image (1 packages loaded, 8 targets configured).
INFO: Found 1 target...
Target //simple_layer:image up-to-date:
bazel-out/k8-dbg-ST-f832ad0148ae/bin/simple_layer/image_
INFO: Elapsed time: 0.279s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
```
... and inspect `./bazel-out`. Bazel tells you the exact path to the `image_`.
### 4. The Load
While inspecting the image is surely interesting, we usually want to load the
image so we can run it.
There is a bazel rule for that: `oci_load`. When we append the following lines
to `BUILD.bazel`:
```python
# Load will immediately load the image (eg: docker load)
oci_load(
name = "load",
image = ":image",
repo_tags = [
"distroless/simple_layer:latest",
],
)
```
... then we can load the image and run it with the following commands:
```
bazel run -c opt //simple_layer:load
docker run --rm distroless/simple_layer:latest
```
### 5. The Push
We just need to add one more target to the build file before we can push the
image to a container registry:
```python
# Bazel target for pushing the Linux image to the docker registry
oci_push(
name = "push",
image = ":image",
remote_tags = ["latest"],
# override with -- --repository foo.bar/org/image
repository = "index.docker.io/renerocksai/simple_layer",
)
```
This will push the `simple_layer` image with the tag `latest` (you can add more)
to the docker registry:
```
bazel run -c opt //simple_layer:push
```
When dealing with both a public and a private container registry - or if you
just want to try it out **right now** - you can always override the repository on
the command line:
```
bazel run -c opt //simple_layer:push -- --repository my.server.com/org/image
```
## Adding weights and data
Dockerizing a model that doesn't need any weights was easy. But what if you want
to create a complete care-free package of a model plus all required weights and
supporting files?
We'll use the [MNIST
example](https://github.com/zml/zml/tree/master/examples/mnist) to illustrate
how to build Docker images that also contain data files.
You can `bazel run -c opt //mnist:push -- --repository
index.docker.io/my_org/zml_mnist` in the `./examples` folder if you want to try
it out.
**Note: Please add one or more of the following parameters to specify all the
platforms your containerized model should support.**
- NVIDIA CUDA: `--@zml//runtimes:cuda=true`
- AMD RoCM: `--@zml//runtimes:rocm=true`
- Google TPU: `--@zml//runtimes:tpu=true`
- AWS Trainium/Inferentia 2: `--@zml//runtimes:neuron=true`
- **AVOID CPU:** `--@zml//runtimes:cpu=false`
**Example:**
```
bazel run //mnist:push -c opt --@zml//runtimes:cuda=true -- --repository index.docker.io/my_org/zml_mnist
```
### Manifest and Archive
We only add one more target to the `BUILD.bazel` to construct the commandline
for the `entrypoint` of the container. All other steps basically remain the
same.
Let's start with creating the manifest and archive:
```python
load("@aspect_bazel_lib//lib:expand_template.bzl", "expand_template")
load("@aspect_bazel_lib//lib:tar.bzl", "mtree_spec", "tar")
load("@aspect_bazel_lib//lib:transitions.bzl", "platform_transition_filegroup")
load("@rules_oci//oci:defs.bzl", "oci_image", "oci_load", "oci_push")
load("@zml//bazel:zig.bzl", "zig_cc_binary")
# The executable
zig_cc_binary(
name = "mnist",
args = [
"$(location @com_github_ggerganov_ggml_mnist//file)",
"$(location @com_github_ggerganov_ggml_mnist_data//file)",
],
data = [
"@com_github_ggerganov_ggml_mnist//file",
"@com_github_ggerganov_ggml_mnist_data//file",
],
main = "mnist.zig",
deps = [
"@zml//async",
"@zml//zml",
],
)
# Manifest created from the executable (incl. its data: weights and dataset)
mtree_spec(
name = "mtree",
srcs = [":mnist"],
)
# Create a tar archive from the above manifest
tar(
name = "archive",
srcs = [":mnist"],
args = [
"--options",
"zstd:compression-level=9",
],
compress = "zstd",
mtree = ":mtree",
)
```
### Entrypoint
Our container entrypoint commandline is not just the name of the executable
anymore, as we need to pass the weights file and the test dataset to MNIST. A
simple string interpolation will not be enough.
For this reason, we use the `expand_template` rule, like this:
```python
# A convenience template for creating the "command line" for the entrypoint
expand_template(
name = "entrypoint",
data = [
":mnist",
"@com_github_ggerganov_ggml_mnist//file",
"@com_github_ggerganov_ggml_mnist_data//file",
],
substitutions = {
":model": "$(rlocationpath @com_github_ggerganov_ggml_mnist//file)",
":data": "$(rlocationpath @com_github_ggerganov_ggml_mnist_data//file)",
},
template = [
"./{}/mnist".format(package_name()),
"./{}/mnist.runfiles/:model".format(package_name()),
"./{}/mnist.runfiles/:data".format(package_name()),
],
)
```
- `data`, which is identical to `data` in the `mnist` target used for running
the model, tells bazel which files are needed.
- in `substitutions`, we define what `:model` and `:data` are replaced with
- in `template`, we construct the actual entrypoint command line (shown expanded
below)
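For reference, the expanded entrypoint ends up looking roughly like the following. The exact runfiles paths are generated by bazel and may differ; this illustration reuses the paths shown in [Deploying Models on a Server](../howtos/deploy_on_server.md):
```
./mnist/mnist \
    ./mnist/mnist.runfiles/_main~_repo_rules~com_github_ggerganov_ggml_mnist/file/mnist.pt \
    ./mnist/mnist.runfiles/_main~_repo_rules~com_github_ggerganov_ggml_mnist_data/file/mnist.ylc
```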
### Image, Push
From here on, everything is analogous to the `simple_layer` example, with one
exception: in the `image_` target, we don't fill in the `entrypoint` directly,
but use the expanded template, which we conveniently named `entrypoint` above.
```python
# The actual docker image, with entrypoint, created from tar archive
oci_image(
name = "image_",
base = "@distroless_cc_debian12",
# the entrypoint comes from the expand_template rule `entrypoint` above
entrypoint = ":entrypoint",
tars = [":archive"],
)
# We always want to create the image for Linux
platform_transition_filegroup(
name = "image",
srcs = [":image_"],
target_platform = "@zml//platforms:linux_amd64",
)
# Load will immediately load the image (eg: docker load)
oci_load(
name = "load",
image = ":image",
repo_tags = [
"distroless/mnist:latest",
],
)
# Bazel target for pushing the Linux image to our docker registry
oci_push(
name = "push",
image = ":image",
remote_tags = ["latest"],
# override with -- --repository foo.bar/org/image
repository = "index.docker.io/steeve/mnist",
)
```
And that's it! With one simple bazel command, you can push a neatly packaged
MNIST model, including weights and dataset, to the docker registry:
```
bazel run //mnist:push --@zml//runtimes:cuda=true -- --repository index.docker.io/my_org/zml_mnist
``` | {
"source": "zml/zml",
"title": "docs/howtos/dockerize_models.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/dockerize_models.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 10829
} |
# How to port Pytorch models to ZML ?
## Requirements
We assume you already have a working ZML project,
and you can run it with a Bazel command like `bazel run //my_project:torch2zml`.
You can refer to [write your first model](../tutorials/write_first_model.md) to do so.
We also assume that you know enough Python to run the reference implementation.
## Overview
Porting Neural Network implementations can be tedious. Some small errors can
degrade the output of the model, in subtle or not so subtle ways. To track down
errors in a model with four thousand layers, we best be organized.
By the way, if you are interested in a specific model, be aware that not all
implementations of a model you can find on GitHub are equivalent. Sometimes
people introduce subtle bugs when porting across Python libraries. Ideally use
the author's implementation, or at least one you have tested yourself.
**The recommended process is as follows:**
1. run the reference implementation on a known input, and sample layer activations
2. start a ZML project and load the sampled reference activations
3. start porting layers one by one, and test individual layers
4. end-to-end test the model
## Sampling reference activations
Pytorch exposes "forward hooks" that allow to inspect the input/output of each
`torch.nn.Module`. That way it is possible to create a dictionary with each
layer input/output, keyed by the name of the layer.
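For illustration, here is a minimal, self-contained sketch of that idea. This is **not** the actual `zml_utils.ActivationCollector` implementation used further below, just a hypothetical helper built on plain PyTorch forward hooks; the function name `collect_activations` is made up for this example.
```python
import torch


def collect_activations(model: torch.nn.Module, *args, **kwargs):
    """Run `model` once and return (output, {layer_name: activation})."""
    activations = {}
    handles = []
    for name, module in model.named_modules():
        # Record each module's output under its fully-qualified name.
        def hook(mod, inputs, output, name=name):
            if isinstance(output, torch.Tensor):
                activations[name] = output.detach().cpu()
        handles.append(module.register_forward_hook(hook))
    try:
        output = model(*args, **kwargs)
    finally:
        for handle in handles:
            handle.remove()  # always remove the hooks again
    return output, activations
```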
The main caveat is that if you have a functional implementation that doesn't
use `torch.nn.Module`, this technique won't work.
It is easiest to start from a "huggingface" snippet, or a Python script
that calls the model of your choice on an example input, e.g.:
```python
import torch
import transformers
model_path = "meta-llama/Meta-Llama-3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_path,
model_kwargs={"torch_dtype": torch.float16},
# device="cuda",
token="hf_...",  # placeholder token; or remove this and rely on `huggingface-cli login`
)
prompt = "Q: What is the largest animal?\nA:"
output = pipeline(prompt)
print(output)
```
Then edit the script to import [zml_utils](https://github.com/zml/zml/blob/master/tools/zml_utils.py).
`zml_utils.py` is standalone and currently not distributed as a Python
package, so the simplest way to use it is to copy it next to your Python
script. Then wrap the model/pipeline in a `zml_utils.ActivationCollector`. The
collector wraps the given model and, when called, returns the original results AND
the activations in a dict of `torch.Tensor`. After that, you
can save those activations to a `.pt` file.
```python
import torch
import transformers
import zml_utils
model_path = "meta-llama/Meta-Llama-3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_path,
model_kwargs={"torch_dtype": torch.float16},
# device="cuda",
)
model, tokenizer = pipeline.model, pipeline.tokenizer
prompt = "Q: What is the largest animal?\nA:"
# Wrap the pipeline, and extract activations.
# Activations files can be huge for big models,
# so let's stop collecting after 1000 layers.
pipeline = zml_utils.ActivationCollector(pipeline, max_layers=1000, stop_after_first_step=True)
output, activations = pipeline(prompt)
# `output` can be `None` if activations collection
# has stopped before the end of the inference
if output:
print(output)
# Save activations to a file.
filename = model_path.split("/")[-1] + ".activations.pt"
torch.save(activations, filename)
print(f"Saved {len(activations)} activations to {filename}")
```
Run this script: `python activations.py`
If you're using HuggingFace, make note of the local path where the model is
saved, it should be something like `~/.cache/huggingface/hub/...`. (and should
appear on the console when running the script). We will need it in the next
steps.
## Loading model and activations in ZML
Let's create a basic ZML program that loads the activations and the Pytorch
model. Put the following in `my_project/torch2zml.zig`.
```zig
const std = @import("std");
const log = std.log;
const asynk = @import("async");
const zml = @import("zml");
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
try asynk.AsyncThread.main(gpa.allocator(), asyncMain, .{});
}
pub fn asyncMain() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
const args = try std.process.argsAlloc(allocator);
defer std.process.argsFree(allocator, args);
const model_path, const activations_path = args[1..3].*;
const activations = try zml.aio.torch.open(allocator, activations_path);
defer activations.deinit();
log.info("Found {} activations in {s}", .{ activations.buffers.count(), activations_path });
const model_weights = try zml.aio.detectFormatAndOpen(allocator, model_path);
defer model_weights.deinit();
log.info("Found {} model layers in {s}", .{ model_weights.buffers.count(), activations_path });
}
```
And add a `zig_cc_binary` target in `my_project/BUILD.bazel`:
```python
load("@zml//bazel:zig.bzl", "zig_cc_binary")
zig_cc_binary(
name = "torch2zml",
main = "torch2zml.zig",
deps = [
"@zml//async",
"@zml//zml",
],
)
```
Now check that the weights can be loaded correctly using the bazel CLI.
```bash
bazel build //my_project:torch2zml
./bazel-bin/my_project/torch2zml /path/to/my/model.safetensors.index.json ./my_project/Meta-Llama-3-8B.activations.pt
info: Found 1108 activations in /Users/guw/Documents/zml/models/torch2zml/Meta-Llama-3-8B.activations.pt
debug(zml_io): Loading shard: model-00004-of-00004.safetensors
debug(zml_io): Loading shard: model-00001-of-00004.safetensors
debug(zml_io): Loading shard: model-00002-of-00004.safetensors
debug(zml_io): Loading shard: model-00003-of-00004.safetensors
info: Found 291 model layers in /Users/guw/Documents/zml/models/torch2zml/Meta-Llama-3-8B.activations.pt
```
## Loading an individual layer
In the above Zig code, the `model_weights` struct is a wrapper around a flat
dictionary, containing an entry for each tensor in the model (similar to a
"state dict"). Manipulating a dictionary is generally not very convenient, so
let's convert it to a Zig struct.
Declare the following layer at the bottom of your file:
```zig
const Mlp = struct {
up_proj: zml.nn.Linear,
gate_proj: zml.nn.Linear,
down_proj: zml.nn.Linear,
};
```
The `zml.nn.Linear` is the equivalent of `torch.nn.Linear` and is defined by
its `weight` and optional `bias` tensors.
To create such a struct from our `model_weights` dictionary, we can use the
`zml.aio.populateModelWithPrefix` helper:
```zig
pub fn asyncMain() !void {
...
const mlp_shape = try zml.aio.populateModelWithPrefix(Mlp, allocator, model_weights, "model.layers.0.mlp");
log.info("layer.0.mlp: {}", .{mlp_shape});
}
```
Build and run, using previous commands.
Typical errors are of the form _"Layer not found: ..."_. This is typically due
to the naming of layers in Zig not matching the naming in the file.
Double-check everything and don't hesitate to print more things, e.g. in the
Python script. Alternatively, Huggingface's web interface lets you peek into
`.safetensors` files.
## Testing an individual layer
Finally, we are going to write the actual math code for our `MLP` layer.
```zig
const Mlp = struct {
up_proj: zml.nn.Linear,
gate_proj: zml.nn.Linear,
down_proj: zml.nn.Linear,
pub fn forward(self: Mlp, x: Tensor) Tensor {
const proj = zml.call(self.up_proj, .forward, .{x});
var output = zml.call(self.gate_proj, .forward, .{x});
output = output.silu().mul(proj);
return zml.call(self.down_proj, .forward, .{output});
}
};
```
Note that we use `zml.call` instead of directly calling
`self.up_proj.forward(x)`. Calling `forward` directly results in the same
computation happening at runtime; but going through `zml.call` allows ZML to
generate an MLIR representation that is closer to the Zig code and therefore
easier to read.
We can test the MLP layer with the `zml.testing.testLayer` utility:
```zig
pub fn asyncMain() !void {
...
var ctx = try zml.Context.init();
defer ctx.deinit();
const platform = ctx.autoPlatform(.{});
const mlp_weights = try zml.aio.loadModelBuffers(Mlp, mlp_shape, model_weights, allocator, platform);
zml.testing.testLayer(platform, activations, "model.layers.0.mlp", mlp_shape, mlp_weights, 1e-3);
}
```
During this phase, you have three kinds of errors that can appear:
* Zig compilation errors: we've all been there, learning a new language
can be tough. Normally, the compiler should help you figure out what's wrong.
You can also check [ZML concepts](../learn/concepts.md) that explains types used
by ZML.
* Buffer not found errors: be careful to use
the naming scheme of the inference pipeline when loading the activations.
Depending on how you write your code, you may have a different naming
convention in the model file and in the activation file. This is because in
Python, and in particular the `transformers` library, it's not uncommon to
wrap the model in a `Pipeline` object before using it. So a given layer may
be named `layer.0.mlp` in the model file, but its activations may be saved
under `model.layer.0.mlp`.
* MLIR compilation errors: typically this is caused by a mathematical
error in the `forward` function. To help here, you can log the shapes of the
input and intermediary values: `std.log.info("x: {}", .{x})`, and put similar
print statements in the Python code. You can also consider splitting a big
layer into smaller parts. Since our code only explicitly captures
`torch.nn.Module` input/output, you may need to modify the Python script to
add some extra tensors to the dictionary with example input/output of a
specific function.
## General tips
* Porting models can be hard, especially if the original code is messy, has
poor comments, behaves differently on different input shapes, or has unused
code paths. Start by identifying parts of the Python code which are
**unused**. It is common in research code that some code paths were written
for one paper, but didn't get used in subsequent papers.
* ZML offers a few Pytorch specific helpers in `zml.torch`; those operators are
offered to help you port models, but in general they may have weird APIs. If
you're lucky and the code you are porting has comments indicating "tags", eg
"C,W,H" of tensors, you can port this to actual tensor attributes using
`x.withTags(.{.c, .w, .h})`, and use those tags (eg `.c`) to refer to axes
instead of offsets. E.g. in Pytorch: `x.sum(0) # reduce over channel axis`
becomes `x.sum(.c)`. More on this topic in
["Working with tensors"](../tutorials/working_with_tensors.md). | {
"source": "zml/zml",
"title": "docs/howtos/howto_torch2zml.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/howto_torch2zml.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 10881
} |
# Huggingface Token Authentication
Some models have restrictions and may require some sort of approval or
agreement process, which, by consequence, **requires token-authentication with
Huggingface**.
Here is how you can generate a **"read-only public repositories"** access token
to log into your account on Huggingface, directly from `bazel`, in order to
download models.
* log in at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
* click on "Create new token"
* give the token a name, eg `zml_public_repos`
* under _Repositories_, grant the following permission: "Read access to
contents of all public gated repos you can access".
* at the bottom, click on "Create token".
* copy the token by clicking `Copy`. **You won't be able to see it again.**
* the token looks something like `hf_abCdEfGhijKlM`.
* store the token on your machine (replace the placeholder with your actual
token):
You can use the `HUGGINGFACE_TOKEN` environment variable to store the token or use
its standard location:
```
mkdir -p $HOME/.cache/huggingface/; echo <hf_my_token> > "$HOME/.cache/huggingface/token"
```
Now you're ready to download a gated model like `Meta-Llama-3-8b`!
**Example:**
```
# requires token in $HOME/.cache/huggingface/token, as created by the
# `huggingface-cli login` command, or the `HUGGINGFACE_TOKEN` environment variable.
cd examples
bazel run -c opt //llama:Meta-Llama-3-8b
bazel run -c opt //llama:Meta-Llama-3-8b -- --prompt="Once upon a time,"
``` | {
"source": "zml/zml",
"title": "docs/howtos/huggingface_access_token.md",
"url": "https://github.com/zml/zml/blob/master/docs/howtos/huggingface_access_token.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 1504
} |
# ZML Concepts
## Model lifecycle
ZML is an inference stack that helps you run Machine Learning (ML) models, and
particularly Neural Networks (NN).
The lifecycle of a model is implemented in the following steps:
1. Open the model file and read the shapes of the weights, but leave the
weights on the disk.
2. Using the loaded shapes and optional metadata, instantiate a model struct
with `Tensor`s, representing the shape and layout of each layer of the NN.
3. Compile the model struct and its `forward` function into an accelerator
specific executable. The `forward` function describes the mathematical
operations corresponding to the model inference.
4. Load the model weights from disk, onto the accelerator memory.
5. Bind the model weights to the executable.
6. Load some user inputs, and copy them to the accelerator.
7. Call the executable on the user inputs.
8. Fetch the returned model output from accelerator into host memory, and
finally present it to the user.
9. When all user inputs have been processed, free the executable resources and
the associated weights.
**Some details:**
Note that the compilation and weight loading steps are both bottlenecks to your
model startup time, but they can be done in parallel. **ZML provides
asynchronous primitives** to make that easy.
The **compilation can be cached** across runs, and if you're always using the
same model architecture with the same shapes, it's possible to by-pass it
entirely.
The accelerator is typically a GPU, but can be another chip, or even the CPU
itself, churning vector instructions.
## Tensor Bros.
In ZML, we leverage Zig's static type system to differentiate between a few
concepts, hence we not only have a `Tensor` to work with, like other ML
frameworks, but also `Buffer`, `HostBuffer`, and `Shape`.
Let's explain all that.
* `Shape`: _describes_ a multi-dimension array.
- `Shape.init(.{16}, .f32)` represents a vector of 16 floats of 32 bits
precision.
- `Shape.init(.{512, 1024}, .f16)` represents a matrix of `512*1024` floats
of 16 bits precision, i.e. a `[512][1024]f16` array.
A `Shape` is only **metadata**, it doesn't point to or own any memory. The
`Shape` struct can also represent a regular number, aka a scalar:
`Shape.init(.{}, .i32)` represents a 32-bit signed integer.
* `HostBuffer`: _is_ a multi-dimensional array, whose memory is allocated **on
the CPU**.
- points to the slice of memory containing the array
- typically owns the underlying memory - but has a flag to remember when it
doesn't.
* `Buffer`: _is_ a multi-dimension array, whose memory is allocated **on an
accelerator**.
- contains a handle that the ZML runtime can use to convert it into a
physical address, but there is no guarantee this address is visible from
the CPU.
- can be created by loading weights from disk directly to the device via
`zml.aio.loadBuffers`
- can be created by calling `HostBuffer.toDevice(accelerator)`.
* `Tensor`: is a mathematical object representing an intermediary result of a
computation.
- is basically a `Shape` with an attached MLIR value representing the
mathematical operation that produced this `Tensor`.
## The model struct
The model struct is the Zig code that describes your Neural Network (NN).
Let's look at the following model architecture:

This is how we can describe it in a Zig struct:
```zig
const Model = struct {
input_layer: zml.Tensor,
output_layer: zml.Tensor,
pub fn forward(self: Model, input: zml.Tensor) zml.Tensor {
const hidden = self.input_layer.matmul(input);
const output = self.output_layer.matmul(hidden);
return output;
}
};
```
NNs are generally seen as a composition of smaller NNs, which are split into
layers. ZML makes it easy to mirror this structure in your code.
```zig
const Model = struct {
input_layer: MyOtherLayer,
output_layer: MyLastLayer,
pub fn forward(self: Model, input: zml.Tensor) zml.Tensor {
const hidden = self.input_layer.forward(input);
const output = self.output_layer.forward(hidden);
return output;
}
};
```
`zml.nn` module provides a number of well-known layers to more easily bootstrap
models.
Since the `Model` struct contains `Tensor`s, it is only ever useful during the
compilation stage, but not during inference. If we want to represent the model
with actual `Buffer`s, we can use `zml.Bufferized(Model)`, which is a mirror
struct of `Model` but with a `Buffer` replacing every `Tensor`.
## Strong type checking
Let's look at the model life cycle again, but this time annotated with the
corresponding types.
1. Open the model file and read the shapes of the weights -> `zml.HostBuffer`
(using memory mapping, no actual copies happen yet)
2. Instantiate a model struct -> `Model` struct (with `zml.Tensor` inside)
3. Compile the model struct and its `forward` function into an executable.
`forward` is a `Tensor -> Tensor` function, and the executable is a
`zml.FnExe(Model.forward)`
4. Load the model weights from disk, onto accelerator memory ->
`zml.Bufferized(Model)` struct (with `zml.Buffer` inside)
5. Bind the model weights to the executable `zml.ModuleExe(Model.forward)`
6. Load some user inputs (custom struct), encode them into arrays of numbers
(`zml.HostBuffer`), and copy them to the accelerator (`zml.Buffer`).
7. Call the executable on the user inputs. `module.call` accepts `zml.Buffer`
arguments and returns `zml.Buffer`
8. Return the model output (`zml.Buffer`) to the host (`zml.HostBuffer`),
decode it (custom struct) and finally return to the user. | {
"source": "zml/zml",
"title": "docs/learn/concepts.md",
"url": "https://github.com/zml/zml/blob/master/docs/learn/concepts.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 5795
} |
# ZML Style Guide
We prefer to keep it simple and adhere to the [Zig Style Guide](https://ziglang.org/documentation/0.13.0/#Style-Guide).
We use ZLS to auto-format code.
In addition, we try to adhere to the following house-rules:
### We favor:
```zig
const x: Foo = .{ .bar = 1 }
// over: const x = Foo{ .bar = 1}
pub fn method(self: Foo) void
// over: pub fn method(self: Self) void
const foo = import("foo.zig"); foo.bar()
// over: const bar = import("foo.zig").bar;
// bar();
const Foo = import("foo.zig").Foo
// over: const Foo = import("Foo.zig")
//
// Importing types directly instead of using
// a namespace should be reserved for very
// frequent types.
/// Foo does X and returns Y
pub fn foo() usize {
// Descriptive doc comments over imperative ones
```
As with the Zig Style Guide: use common sense 😊. | {
"source": "zml/zml",
"title": "docs/misc/style_guide.md",
"url": "https://github.com/zml/zml/blob/master/docs/misc/style_guide.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 835
} |
# Getting Started with ZML
In this tutorial, we will install `ZML` and run a few models locally.
## Prerequisites
First, let's check out the ZML codebase. In a terminal, run:
```
git clone https://github.com/zml/zml.git
cd zml/
```
We use `bazel` to build ZML and its dependencies. We recommend downloading it
through `bazelisk`, a version manager for `bazel`.
### Install Bazel:
**macOS:**
```
brew install bazelisk
```
**Linux:**
```
curl -L -o /usr/local/bin/bazel 'https://github.com/bazelbuild/bazelisk/releases/download/v1.25.0/bazelisk-linux-amd64'
chmod +x /usr/local/bin/bazel
```
## Run a pre-packaged model
ZML comes with a variety of model examples. See also our reference implementations in the [examples](https://github.com/zml/zml/tree/master/examples/) folder.
### MNIST
The [classic](https://en.wikipedia.org/wiki/MNIST_database) handwritten digits
recognition task. The model is tasked to recognize a handwritten digit, which
has been converted to a 28x28 pixel monochrome image. `Bazel` will download a
pre-trained model, and the test dataset. The program will load the model,
compile it, and classify a randomly picked example from the test dataset.
On the command line:
```
cd examples
bazel run -c opt //mnist
```
### Llama
Llama is a family of "Large Language Models", trained to generate text, based
on the beginning of a sentence/book/article. This "beginning" is generally
referred to as the "prompt".
#### Meta Llama 3.1 8B
This model has restrictions, see
[here](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It **requires
approval from Meta on Huggingface**, which can take a few hours to get granted.
While waiting for approval, you can already
[generate your Huggingface access token](../howtos/huggingface_access_token.md).
Once you've been granted access, you're ready to download a gated model like
`Meta-Llama-3.1-8B-Instruct`!
```
# requires token in $HOME/.cache/huggingface/token, as created by the
# `huggingface-cli login` command, or the `HUGGINGFACE_TOKEN` environment variable.
cd examples
bazel run -c opt //llama:Llama-3.1-8B-Instruct
bazel run -c opt //llama:Llama-3.1-8B-Instruct -- --prompt="What is the capital of France?"
```
You can also try `Llama-3.1-70B-Instruct` if you have enough memory.
### Meta Llama 3.2 1B
Like the 8B model above, this model also requires approval. See
[here](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) for access requirements.
```
cd examples
bazel run -c opt //llama:Llama-3.2-1B-Instruct
bazel run -c opt //llama:Llama-3.2-1B-Instruct -- --prompt="What is the capital of France?"
```
For a larger 3.2 model, you can also try `Llama-3.2-3B-Instruct`.
## Run Tests
```
bazel test //zml:test
```
## Running Models on GPU / TPU
You can compile models for accelerator runtimes by appending one or more of the
following arguments to the command line when compiling or running a model:
- NVIDIA CUDA: `--@zml//runtimes:cuda=true`
- AMD RoCM: `--@zml//runtimes:rocm=true`
- Google TPU: `--@zml//runtimes:tpu=true`
- AWS Trainium/Inferentia 2: `--@zml//runtimes:neuron=true`
- **AVOID CPU:** `--@zml//runtimes:cpu=false`
The latter, avoiding compilation for CPU, cuts down compilation time.
So, to run the Llama model from above on a host with an NVIDIA GPU,
run the following:
```
cd examples
bazel run -c opt //llama:Llama-3.2-1B-Instruct \
--@zml//runtimes:cuda=true \
-- --prompt="What is the capital of France?"
```
## Where to go next:
In [Deploying Models on a Server](../howtos/deploy_on_server.md), we show how you can
cross-compile and package for a specific architecture, then deploy and run your
model. Alternatively, you can also [dockerize](../howtos/dockerize_models.md) your
model.
You might also want to check out the
[examples](https://github.com/zml/zml/tree/master/examples), read through the
[documentation](../README.md), start
[writing your first model](../tutorials/write_first_model.md), or read about more
high-level [ZML concepts](../learn/concepts.md). | {
"source": "zml/zml",
"title": "docs/tutorials/getting_started.md",
"url": "https://github.com/zml/zml/blob/master/docs/tutorials/getting_started.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 4098
} |
# Simplifying Dimension Handling with Tagged Tensors
### Coming Soon...
See [ZML Concepts](../learn/concepts.md) for an introduction to Tensors and Shapes. | {
"source": "zml/zml",
"title": "docs/tutorials/working_with_tensors.md",
"url": "https://github.com/zml/zml/blob/master/docs/tutorials/working_with_tensors.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 157
} |
# Writing your first model
**In this short guide, we will do the following:**
- clone ZML to work directly within the prepared example folder
- add Zig code to implement our model
- add some Bazel to integrate our code with ZML
- no weights files or anything external is required for this example
The reason we're doing our exercise in the `examples` folder is because it's
especially prepared for new ZML projects. It contains everything needed for ZML
development. From `bazel` configs to `vscode` settings, and `neovim` LSP
support. The `examples` folder serves as a cookiecutter ZML project example,
with just a few example models added already.
**Note:** _The `examples` folder is self-contained. You **can** make a copy of
it to a location outside of the ZML repository. Simply remove all examples you
don't need and use it as a template for your own projects._
So, let's get started, shall we?
**If you haven't done so already, please [install bazel](../tutorials/getting_started.md)**.
Check out the ZML repository. In the `examples` directory, create a new folder
for your project. Let's call it `simple_layer`.
```
git clone https://github.com/zml/zml.git
cd zml/examples
mkdir -p simple_layer
```
... and add a file `main.zig` to it, along with a bazel build file:
```
touch simple_layer/main.zig
touch simple_layer/BUILD.bazel
```
By the way, you can access the complete source code of this walkthrough here:
- [main.zig](https://github.com/zml/zml/tree/master/examples/simple_layer/main.zig)
- [BUILD.bazel](https://github.com/zml/zml/tree/master/examples/simple_layer/BUILD.bazel)
## The high-level Overview
Before firing up our editor, let's quickly talk about a few basic ZML
fundamentals.
In ZML, we describe a _Module_, which represents our AI model, as a Zig
`struct`. That struct can contain Tensor fields that are used for computation,
e.g. weights and biases. In the _forward_ function of a Module, we describe the
computation by calling tensor operations like _mul_, _add_, _dotGeneral_,
_conv2D_, etc., or even nested Modules.
ZML creates an MLIR representation of the computation when we compile the
Module. For compilation, only the _Shapes_ of all tensors must be known. No
actual tensor data is needed at this step. This is important for large models:
we can compile them while the actual weight data is being fetched from disk.
To accomplish this, ZML uses a _BufferStore_. The _BufferStore_ knows how to
only load shapes and when to load actual tensor data. In our example, we will
fake the _BufferStore_ a bit: we won't load from disk; we'll use float arrays
instead.
After compilation is done (and the _BufferStore_ has finished loading weights),
we can send the weights from the _BufferStore_ to our computation device. That
produces an _executable_ module which we can call with different _inputs_.
In our example, we then copy the result from the computation device to CPU
memory and print it.
**So the steps for us are:**
- describe the computation as ZML _Module_, using tensor operations
- create a _BufferStore_ that provides _Shapes_ and data of weights and bias
(ca. 5 lines of code).
- compile the _Module_ **asynchronously**
- make the compiled _Module_ send the weights (and bias) to the computation
device utilizing the _BufferStore_, producing an _executable_ module
- prepare input tensor and call the _executable_ module.
- get the result back to CPU memory and print it
If you like to read more about the underlying concepts of the above, please see
[ZML Concepts](../learn/concepts.md).
## The code
Let's start by writing some Zig code, importing ZML and often-used modules:
```zig
const std = @import("std");
const zml = @import("zml");
const asynk = @import("async");
// shortcut to the asyncc function in the asynk module
const asyncc = asynk.asyncc;
```
You will probably use the above lines in all ZML projects. Also, note that **ZML is
async** and comes with its own async runtime, thanks to
[zigcoro](https://github.com/rsepassi/zigcoro).
### Defining our Model
We will start with a very simple "Model". One that resembles a "multiply and
add" operation.
```zig
/// Model definition
const Layer = struct {
bias: ?zml.Tensor = null,
weight: zml.Tensor,
pub fn forward(self: Layer, x: zml.Tensor) zml.Tensor {
var y = self.weight.mul(x);
if (self.bias) |bias| {
y = y.add(bias);
}
return y;
}
};
```
As you can see, in ZML, AI models are just structs with a forward function!
There are more things to observe:
- forward functions typically take Tensors as inputs, and return Tensors.
- more advanced use-cases pass in or return structs or tuples, like
`struct { Tensor, Tensor }` for a tuple of two tensors; a minimal sketch
follows this list. You can see such use-cases, for example, in the
[Llama Model](https://github.com/zml/zml/tree/master/examples/llama)
- in the model, tensors may be optional. As is the case with `bias`.
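As a minimal, hypothetical sketch of that tuple-returning style (the `TwoHeads` struct and its fields are invented for illustration; only `zml.Tensor` and `.mul` are taken from this tutorial):
```zig
/// Hypothetical module whose forward returns a tuple of two tensors.
const TwoHeads = struct {
    weight_a: zml.Tensor,
    weight_b: zml.Tensor,

    pub fn forward(self: TwoHeads, x: zml.Tensor) struct { zml.Tensor, zml.Tensor } {
        // Two independent projections of the same input, returned together.
        return .{ self.weight_a.mul(x), self.weight_b.mul(x) };
    }
};
```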
### Adding a main() function
ZML code is async. Hence, we need to provide an async main function. It works
like this:
```zig
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
try asynk.AsyncThread.main(gpa.allocator(), asyncMain);
}
pub fn asyncMain() !void {
// ...
```
The above `main()` function only creates an allocator and an async main thread
that executes our `asyncMain()` function by calling it with no (`.{}`)
arguments.
So, let's start with the async main function:
```zig
pub fn asyncMain() !void {
// Short lived allocations
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
// Arena allocator for BufferStore etc.
var arena_state = std.heap.ArenaAllocator.init(allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
// Create ZML context
var context = try zml.Context.init();
defer context.deinit();
const platform = context.autoPlatform(.{});
...
}
```
This is boilerplate code that provides a general-purpose allocator and, for
convenience, an arena allocator that we will use later. The advantage of arena
allocators is that you don't need to deallocate individual allocations; you
simply call `.deinit()` to deinitialize the entire arena instead!
We also initialize the ZML context `context` and get our CPU `platform`
automatically.
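As a standalone illustration of that arena pattern (plain Zig standard library, independent of ZML):
```zig
const std = @import("std");

pub fn main() !void {
    var arena_state = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    // A single deinit() releases every allocation made from this arena.
    defer arena_state.deinit();
    const arena = arena_state.allocator();

    const numbers = try arena.alloc(u32, 16); // no matching free() needed
    numbers[0] = 42;
    std.debug.print("first = {d}\n", .{numbers[0]});
}
```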
### The BufferStore
Next, we need to set up the concrete weight and bias tensors for our model.
Typically, we would load them from disk. But since our example works without
stored weights, we are going to create a BufferStore manually, containing
_HostBuffers_ (buffers on the CPU) for both the `weight` and the `bias` tensor.
A BufferStore basically contains a dictionary with string keys that match the
name of the struct fields of our `Layer` struct. So, let's create this
dictionary:
```zig
// Our weights and bias to use
var weights = [3]f16{ 2.0, 2.0, 2.0 };
var bias = [3]f16{ 1.0, 2.0, 3.0 };
const input_shape = zml.Shape.init(.{3}, .f16);
// We manually produce a BufferStore. You would not normally do that.
// A BufferStore is usually created by loading model data from a file.
var buffers: zml.aio.BufferStore.Buffers = .{};
try buffers.put(arena, "weight", zml.HostBuffer.fromArray(&weights));
try buffers.put(arena, "bias", zml.HostBuffer.fromArray(&bias));
// the actual BufferStore
const bs: zml.aio.BufferStore = .{
.arena = arena_state,
.buffers = buffers,
};
```
Our weights are `{2.0, 2.0, 2.0}`, and our bias is just `{1.0, 2.0, 3.0}`. The
shape of the weight and bias tensors is `{3}`, and because of that, the **shape
of the input tensor** is also going to be `{3}`!
Note that `zml.Shape` always takes the data type associated with the tensor. In
our example, that is `f16`, expressed as the enum value `.f16`.
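For instance, extrapolating from the call above, a two-dimensional 2x2 tensor of `f16` values would be described like this (a sketch; the variable name is arbitrary):
```zig
const mat_shape = zml.Shape.init(.{ 2, 2 }, .f16); // shape {2, 2}, dtype f16
```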
### Compiling our Module for the accelerator
We're only going to use the CPU for our simple model, but we need to compile the
`forward()` function nonetheless. This compilation is usually done
asynchronously. That means, we can continue doing other things while the module
is compiling:
```zig
// A clone of our model, consisting of shapes. We only need shapes for compiling.
// We use the BufferStore to infer the shapes.
const model_shapes = try zml.aio.populateModel(Layer, allocator, bs);
// Start compiling. This uses the inferred shapes from the BufferStore.
// The shape of the input tensor, we have to pass in manually.
var compilation = try asyncc(
zml.compileModel,
.{ allocator, Layer.forward, model_shapes, .{input_shape}, platform },
);
// Produce a bufferized weights struct from the fake BufferStore.
// This is like the inferred shapes, but with actual values.
// We will need to send those to the computation device later.
var model_weights = try zml.aio.loadBuffers(Layer, .{}, bs, arena, platform);
defer zml.aio.unloadBuffers(&model_weights); // for good practice
// Wait for compilation to finish
const compiled = try compilation.awaitt();
```
Compilation happens in the background via the `asyncc` function. We call
`asyncc` with the `zml.compileModel` function and its arguments
separately. The arguments themselves are basically the shapes of the weights in
the BufferStore, the `.forward` function name in order to compile
`Layer.forward`, the shape of the input tensor(s), and the platform for which to
compile (we used the auto platform).
### Creating the Executable Model
Now that we have compiled the module utilizing the shapes, we turn it into an
executable.
```zig
// pass the model weights to the compiled module to create an executable module
// all required memory has been allocated in `compile`.
var executable = compiled.prepare(model_weights);
defer executable.deinit();
```
### Calling / running the Model
The executable can now be invoked with an input of our choice.
To create the `input`, we directly use `zml.Buffer` by calling
`zml.Buffer.fromArray()`. It's important to note that `Buffer`s reside in
_accelerator_ (or _device_) memory, which is precisely where the input needs to
be for the executable to process it on the device.
For clarity, let's recap the distinction: `HostBuffer`s are located in standard
_host_ memory, which is accessible by the CPU. When we initialized the weights,
we used `HostBuffers` to set up the `BufferStore`. This is because the
`BufferStore` typically loads weights from disk into `HostBuffer`s, and then
converts them into `Buffer`s when we call `loadBuffers()`.
However, for inputs, we bypass the `BufferStore` and create `Buffer`s directly
in device memory.
```zig
// prepare an input buffer
// Here, we use zml.HostBuffer.fromSlice to show how you would create a
// HostBuffer with a specific shape from an array.
// For situations where e.g. you have an [4]f16 array but need a .{2, 2} input
// shape.
var input = [3]f16{ 5.0, 5.0, 5.0 };
var input_buffer = try zml.Buffer.from(
platform,
zml.HostBuffer.fromSlice(input_shape, &input),
);
defer input_buffer.deinit();
// call our executable module
var result: zml.Buffer = executable.call(.{input_buffer});
defer result.deinit();
// fetch the result buffer to CPU memory
const cpu_result = try result.toHostAlloc(arena);
std.debug.print(
"\n\nThe result of {d} * {d} + {d} = {d}\n",
.{ &weights, &input, &bias, cpu_result.items(f16) },
);
```
Note that the result of a computation usually resides in the memory of the
computation device, so with `.toHostAlloc()` we bring it back to CPU memory in
the form of a `HostBuffer`. After that, we can print it.
In order to print it, we need to tell the host buffer how to interpret the
memory. We do that by calling `.items(f16)`, making it cast the memory to `f16`
items.
And that's it! Now, let's have a look at building and actually running this
example!
## Building it
As mentioned already, ZML uses Bazel; so to build our model, we just need to
create a simple `BUILD.bazel` file, next to the `main.zig` file, like this:
```python
load("@zml//bazel:zig.bzl", "zig_cc_binary")
zig_cc_binary(
name = "simple_layer",
main = "main.zig",
deps = [
"@zml//async",
"@zml//zml",
],
)
```
To produce an executable, we import `zig_cc_binary` from the zig rules, and
pass it a name and the zig file we just wrote. The dependencies in `deps` are
what's needed for a basic ZML executable and correspond to our imports at the
top of the Zig file:
```zig
const zml = @import("zml");
const asynk = @import("async");
```
## Running it
With everything in place now, running the model is easy:
```
# run release (-c opt)
cd examples
bazel run -c opt //simple_layer
# compile and run debug version
bazel run //simple_layer
```
And voila! Here's the output:
```
bazel run -c opt //simple_layer
INFO: Analyzed target //simple_layer:simple_layer (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //simple_layer:simple_layer up-to-date:
bazel-bin/simple_layer/simple_layer
INFO: Elapsed time: 0.120s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/simple_layer/simple_layer
info(pjrt): Loaded library: libpjrt_cpu.dylib
info(zml_module): Compiling main.Layer.forward with { Shape({3}, dtype=.f16) }
The result of { 2, 2, 2 } * { 5, 5, 5 } + { 1, 2, 3 } = { 11, 12, 13 }
```
---
You can access the complete source code of this walkthrough here:
- [main.zig](https://github.com/zml/zml/tree/master/examples/simple_layer/main.zig)
- [BUILD.bazel](https://github.com/zml/zml/tree/master/examples/simple_layer/BUILD.bazel)
## The complete example
```zig
const std = @import("std");
const zml = @import("zml");
const asynk = @import("async");
const asyncc = asynk.asyncc;
/// Model definition
const Layer = struct {
bias: ?zml.Tensor = null,
weight: zml.Tensor,
pub fn forward(self: Layer, x: zml.Tensor) zml.Tensor {
var y = self.weight.mul(x);
if (self.bias) |bias| {
y = y.add(bias);
}
return y;
}
};
pub fn main() !void {
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
try asynk.AsyncThread.main(gpa.allocator(), asyncMain);
}
pub fn asyncMain() !void {
// Short lived allocations
var gpa = std.heap.GeneralPurposeAllocator(.{}){};
defer _ = gpa.deinit();
const allocator = gpa.allocator();
// Arena allocator for BufferStore etc.
var arena_state = std.heap.ArenaAllocator.init(allocator);
defer arena_state.deinit();
const arena = arena_state.allocator();
// Create ZML context
var context = try zml.Context.init();
defer context.deinit();
const platform = context.autoPlatform(.{});
// Our weights and bias to use
var weights = [3]f16{ 2.0, 2.0, 2.0 };
var bias = [3]f16{ 1.0, 2.0, 3.0 };
const input_shape = zml.Shape.init(.{3}, .f16);
// We manually produce a BufferStore. You would not normally do that.
// A BufferStore is usually created by loading model data from a file.
var buffers: zml.aio.BufferStore.Buffers = .{};
try buffers.put(arena, "weight", zml.HostBuffer.fromArray(&weights));
try buffers.put(arena, "bias", zml.HostBuffer.fromArray(&bias));
// the actual BufferStore
const bs: zml.aio.BufferStore = .{
.arena = arena_state,
.buffers = buffers,
};
// A clone of our model, consisting of shapes. We only need shapes for
// compiling. We use the BufferStore to infer the shapes.
const model_shapes = try zml.aio.populateModel(Layer, allocator, bs);
// Start compiling. This uses the inferred shapes from the BufferStore.
// The shape of the input tensor, we have to pass in manually.
var compilation = try asyncc(zml.compileModel, .{ allocator, Layer.forward, model_shapes, .{input_shape}, platform });
// Produce a bufferized weights struct from the fake BufferStore.
// This is like the inferred shapes, but with actual values.
// We will need to send those to the computation device later.
var model_weights = try zml.aio.loadBuffers(Layer, .{}, bs, arena, platform);
defer zml.aio.unloadBuffers(&model_weights); // for good practice
// Wait for compilation to finish
const compiled = try compilation.awaitt();
// pass the model weights to the compiled module to create an executable
// module
var executable = compiled.prepare(model_weights);
defer executable.deinit();
// prepare an input buffer
// Here, we use zml.HostBuffer.fromSlice to show how you would create a
// HostBuffer with a specific shape from an array.
// For situations where e.g. you have an [4]f16 array but need a .{2, 2}
// input shape.
var input = [3]f16{ 5.0, 5.0, 5.0 };
var input_buffer = try zml.Buffer.from(
platform,
zml.HostBuffer.fromSlice(input_shape, &input),
);
defer input_buffer.deinit();
// call our executable module
var result: zml.Buffer = executable.call(.{input_buffer});
defer result.deinit();
// fetch the result to CPU memory
const cpu_result = try result.toHostAlloc(arena);
std.debug.print(
"\n\nThe result of {d} * {d} + {d} = {d}\n",
.{ &weights, &input, &bias, cpu_result.items(f16) },
);
}
```
## Where to go from here
- [Add some weights files to your model](../howtos/add_weights.md)
- [Run the model on GPU](../tutorials/getting_started.md)
- [Deploy the model on a server](../howtos/deploy_on_server.md)
- [Dockerize this model](../howtos/dockerize_models.md)
- [Learn more about ZML concepts](../learn/concepts.md)
- [Find out how to best port PyTorch models](../howtos/howto_torch2zml.md) | {
"source": "zml/zml",
"title": "docs/tutorials/write_first_model.md",
"url": "https://github.com/zml/zml/blob/master/docs/tutorials/write_first_model.md",
"date": "2024-09-17T09:13:32",
"stars": 2086,
"description": "Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild",
"file_size": 17807
} |
# Contributing to lume
We deeply appreciate your interest in contributing to lume! Whether you're reporting bugs, suggesting enhancements, improving docs, or submitting pull requests, your contributions help improve the project for everyone.
## Reporting Bugs
If you've encountered a bug in the project, we encourage you to report it. Please follow these steps:
1. **Check the Issue Tracker**: Before submitting a new bug report, please check our issue tracker to see if the bug has already been reported.
2. **Create a New Issue**: If the bug hasn't been reported, create a new issue with:
- A clear title and detailed description
- Steps to reproduce the issue
- Expected vs actual behavior
- Your environment (macOS version, lume version)
- Any relevant logs or error messages
3. **Label Your Issue**: Label your issue as a `bug` to help maintainers identify it quickly.
## Suggesting Enhancements
We're always looking for suggestions to make lume better. If you have an idea:
1. **Check Existing Issues**: See if someone else has already suggested something similar.
2. **Create a New Issue**: If your enhancement is new, create an issue describing:
- The problem your enhancement solves
- How your enhancement would work
- Any potential implementation details
- Why this enhancement would benefit lume users
## Documentation
Documentation improvements are always welcome. You can:
- Fix typos or unclear explanations
- Add examples and use cases
- Improve API documentation
- Add tutorials or guides
For detailed instructions on setting up your development environment and submitting code contributions, please see our [Development.md](docs/Development.md) guide.
Feel free to join our [Discord community](https://discord.com/invite/mVnXXpdE85) to discuss ideas or get help with your contributions. | {
"source": "trycua/lume",
"title": "CONTRIBUTING.md",
"url": "https://github.com/trycua/lume/blob/main/CONTRIBUTING.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Silicon.",
"file_size": 1841
} |
<div align="center">
<h1>
<div class="image-wrapper" style="display: inline-block;">
<picture>
<source media="(prefers-color-scheme: dark)" alt="logo" height="150" srcset="img/logo_white.png" style="display: block; margin: auto;">
<source media="(prefers-color-scheme: light)" alt="logo" height="150" srcset="img/logo_black.png" style="display: block; margin: auto;">
<img alt="Shows my svg">
</picture>
</div>
[](#)
[](#)
[](#install)
[](https://discord.com/invite/mVnXXpdE85)
</h1>
</div>
**lume** is a lightweight Command Line Interface and local API server to create, run and manage macOS and Linux virtual machines (VMs) with near-native performance on Apple Silicon, using Apple's `Virtualization.Framework`.
### Run prebuilt macOS images in just 1 step
<div align="center">
<img src="img/cli.png" alt="lume cli">
</div>
```bash
lume run macos-sequoia-vanilla:latest
```
For a python interface, check out [pylume](https://github.com/trycua/pylume).
## Usage
```bash
lume <command>
Commands:
lume create <name> Create a new macOS or Linux VM
lume run <name> Run a VM
lume ls List all VMs
lume get <name> Get detailed information about a VM
lume set <name> Modify VM configuration
lume stop <name> Stop a running VM
lume delete <name> Delete a VM
lume pull <image> Pull a macOS image from container registry
lume clone <name> <new-name> Clone an existing VM
lume images List available macOS images in local cache
lume ipsw Get the latest macOS restore image URL
lume prune Remove cached images
lume serve Start the API server
Options:
--help Show help [boolean]
--version Show version number [boolean]
Command Options:
create:
--os <os> Operating system to install (macOS or linux, default: macOS)
--cpu <cores> Number of CPU cores (default: 4)
--memory <size> Memory size, e.g., 8GB (default: 4GB)
--disk-size <size> Disk size, e.g., 50GB (default: 40GB)
--display <res> Display resolution (default: 1024x768)
--ipsw <path> Path to IPSW file or 'latest' for macOS VMs
run:
--no-display Do not start the VNC client app
--shared-dir <dir> Share directory with VM (format: path[:ro|rw])
--mount <path> For Linux VMs only, attach a read-only disk image
--registry <url> Container registry URL (default: ghcr.io)
--organization <org> Organization to pull from (default: trycua)
--vnc-port <port> Port to use for the VNC server (default: 0 for auto-assign)
set:
--cpu <cores> New number of CPU cores (e.g., 4)
--memory <size> New memory size (e.g., 8192MB or 8GB)
--disk-size <size> New disk size (e.g., 40960MB or 40GB)
--display <res> New display resolution in format WIDTHxHEIGHT (e.g., 1024x768)
delete:
--force Force deletion without confirmation
pull:
--registry <url> Container registry URL (default: ghcr.io)
--organization <org> Organization to pull from (default: trycua)
serve:
--port <port> Port to listen on (default: 3000)
```
## Install
```bash
brew tap trycua/lume
brew install lume
```
You can also download the `lume.pkg.tar.gz` archive from the [latest release](https://github.com/trycua/lume/releases), extract it, and install the package manually.
## Prebuilt Images
Pre-built images are available in the registry [ghcr.io/trycua](https://github.com/orgs/trycua/packages).
These images come with an SSH server pre-configured and auto-login enabled.
For the security of your VM, change the default password `lume` immediately after your first login.
| Image | Tag | Description | Size |
|-------|------------|-------------|------|
| `macos-sequoia-vanilla` | `latest`, `15.2` | macOS Sequoia 15.2 | 40GB |
| `macos-sequoia-xcode` | `latest`, `15.2` | macOS Sequoia 15.2 with Xcode command line tools | 50GB |
| `ubuntu-noble-vanilla` | `latest`, `24.04.1` | [Ubuntu Server for ARM 24.04.1 LTS](https://ubuntu.com/download/server/arm) with Ubuntu Desktop | 20GB |
For additional disk space, resize the VM disk after pulling the image using the `lume set <name> --disk-size <size>` command.
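For example, to grow the disk of a VM created from the vanilla image (the VM name and the 60GB figure are only illustrative):
```bash
lume set macos-sequoia-vanilla --disk-size 60GB
```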
## Local API Server
`lume` exposes a local HTTP API server that listens on `http://localhost:3000/lume`, enabling automated management of VMs.
```bash
lume serve
```
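Once the server is running, you can drive it with plain HTTP; for example, listing VMs uses the `GET /vms` endpoint shown in the API Reference:
```bash
curl http://localhost:3000/lume/vms
```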
For detailed API documentation, please refer to [API Reference](docs/API-Reference.md).
## Docs
- [API Reference](docs/API-Reference.md)
- [Development](docs/Development.md)
- [FAQ](docs/FAQ.md)
## Contributing
We welcome and greatly appreciate contributions to lume! Whether you're improving documentation, adding new features, fixing bugs, or adding new VM images, your efforts help make lume better for everyone. For detailed instructions on how to contribute, please refer to our [Contributing Guidelines](CONTRIBUTING.md).
Join our [Discord community](https://discord.com/invite/mVnXXpdE85) to discuss ideas or get assistance.
## License
lume is open-sourced under the MIT License - see the [LICENSE](LICENSE) file for details.
## Trademarks
Apple, macOS, and Apple Silicon are trademarks of Apple Inc. Ubuntu and Canonical are registered trademarks of Canonical Ltd. This project is not affiliated with, endorsed by, or sponsored by Apple Inc. or Canonical Ltd.
## Stargazers over time
[](https://starchart.cc/trycua/lume) | {
"source": "trycua/lume",
"title": "README.md",
"url": "https://github.com/trycua/lume/blob/main/README.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Silicon.",
"file_size": 6036
} |
## API Reference
<details open>
<summary><strong>Create VM</strong> - POST /vms</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"name": "lume_vm",
"os": "macOS",
"cpu": 2,
"memory": "4GB",
"diskSize": "64GB",
"display": "1024x768",
"ipsw": "latest"
}' \
http://localhost:3000/lume/vms
```
</details>
<details open>
<summary><strong>Run VM</strong> - POST /vms/:name/run</summary>
```bash
# Basic run
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
http://localhost:3000/lume/vms/my-vm-name/run
# Run with VNC client started and shared directory
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"noDisplay": false,
"sharedDirectories": [
{
"hostPath": "~/Projects",
"readOnly": false
}
]
}' \
http://localhost:3000/lume/vms/lume_vm/run
```
</details>
<details open>
<summary><strong>List VMs</strong> - GET /vms</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
http://localhost:3000/lume/vms
```
```
[
{
"name": "my-vm",
"state": "stopped",
"os": "macOS",
"cpu": 2,
"memory": "4GB",
"diskSize": "64GB"
},
{
"name": "my-vm-2",
"state": "stopped",
"os": "linux",
"cpu": 2,
"memory": "4GB",
"diskSize": "64GB"
}
]
```
</details>
<details open>
<summary><strong>Get VM Details</strong> - GET /vms/:name</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
  http://localhost:3000/lume/vms/lume_vm
```
```
{
"name": "lume_vm",
"state": "running",
"os": "macOS",
"cpu": 2,
"memory": "4GB",
"diskSize": "64GB"
}
```
</details>
<details open>
<summary><strong>Update VM Settings</strong> - PATCH /vms/:name</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X PATCH \
-H "Content-Type: application/json" \
-d '{
"cpu": 4,
"memory": "8GB",
"diskSize": "128GB"
}' \
http://localhost:3000/lume/vms/my-vm-name
```
</details>
<details open>
<summary><strong>Stop VM</strong> - POST /vms/:name/stop</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
http://localhost:3000/lume/vms/my-vm-name/stop
```
</details>
<details open>
<summary><strong>Delete VM</strong> - DELETE /vms/:name</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X DELETE \
http://localhost:3000/lume/vms/my-vm-name
```
</details>
<details open>
<summary><strong>Pull Image</strong> - POST /pull</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"image": "macos-sequoia-vanilla:latest",
"name": "my-vm-name",
"registry": "ghcr.io",
"organization": "trycua"
}' \
http://localhost:3000/lume/pull
```
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"image": "macos-sequoia-vanilla:15.2",
"name": "macos-sequoia-vanilla"
}' \
http://localhost:3000/lume/pull
```
</details>
<details open>
<summary><strong>Clone VM</strong> - POST /vms/:name/clone</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
-H "Content-Type: application/json" \
-d '{
"name": "source-vm",
"newName": "cloned-vm"
}' \
http://localhost:3000/lume/vms/source-vm/clone
```
</details>
<details open>
<summary><strong>Get Latest IPSW URL</strong> - GET /ipsw</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
http://localhost:3000/lume/ipsw
```
</details>
<details open>
<summary><strong>List Images</strong> - GET /images</summary>
```bash
# List images with default organization (trycua)
curl --connect-timeout 6000 \
--max-time 5000 \
http://localhost:3000/lume/images
```
```json
{
"local": [
"macos-sequoia-xcode:latest",
"macos-sequoia-vanilla:latest"
]
}
```
</details>
<details open>
<summary><strong>Prune Images</strong> - POST /lume/prune</summary>
```bash
curl --connect-timeout 6000 \
--max-time 5000 \
-X POST \
http://localhost:3000/lume/prune
```
</details> | {
"source": "trycua/lume",
"title": "docs/API-Reference.md",
"url": "https://github.com/trycua/lume/blob/main/docs/API-Reference.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Silicon.",
"file_size": 4243
} |
# Development Guide
This guide will help you set up your development environment and understand the process for contributing code to lume.
## Environment Setup
Lume development requires:
- Swift 6 or higher
- Xcode 15 or higher
- macOS Sequoia 15.2 or higher
- (Optional) VS Code with Swift extension
## Setting Up the Repository Locally
1. **Fork the Repository**: Create your own fork of lume
2. **Clone the Repository**:
```bash
git clone https://github.com/trycua/lume.git
cd lume
```
3. **Install Dependencies**:
```bash
swift package resolve
```
4. **Build the Project**:
```bash
swift build
```
## Development Workflow
1. Create a new branch for your changes
2. Make your changes
3. Run the tests: `swift test`
4. Build and test your changes locally
5. Commit your changes with clear commit messages
## Submitting Pull Requests
1. Push your changes to your fork
2. Open a Pull Request with:
- A clear title and description
- Reference to any related issues
- Screenshots or logs if relevant
3. Respond to any feedback from maintainers | {
"source": "trycua/lume",
"title": "docs/Development.md",
"url": "https://github.com/trycua/lume/blob/main/docs/Development.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Silicon.",
"file_size": 1090
} |
# FAQs
### Where are the VMs stored?
VMs are stored in `~/.lume`.
### How are images cached?
Images are cached in `~/.lume/cache`. When doing `lume pull <image>`, it will check if the image is already cached. If not, it will download the image and cache it, removing any older versions.
### Are VM disks taking up all the disk space?
No, macOS uses sparse files, which only allocate space as needed. For example, VM disks totaling 50 GB may only use 20 GB on disk.
### How do I get the latest macOS restore image URL?
```bash
lume ipsw
```
### How do I delete a VM?
```bash
lume delete <name>
```
### How to Install macOS from an IPSW Image
#### Create a new macOS VM using the latest supported IPSW image:
Run the following command to create a new macOS virtual machine using the latest available IPSW image:
```bash
lume create <name> --os macos --ipsw latest
```
#### Create a new macOS VM using a specific IPSW image:
To create a macOS virtual machine from an older or specific IPSW file, first download the desired IPSW (UniversalMac) from a trusted source.
Then, use the downloaded IPSW path:
```bash
lume create <name> --os macos --ipsw <downloaded_ipsw_path>
```
### How do I install a custom Linux image?
The process for creating a custom Linux image differs from macOS, as IPSW restore files are not used. You need to create a Linux VM first, then mount a setup image file to the VM for the first boot.
```bash
lume create <name> --os linux
lume run <name> --mount <path-to-setup-image>
lume run <name>
``` | {
"source": "trycua/lume",
"title": "docs/FAQ.md",
"url": "https://github.com/trycua/lume/blob/main/docs/FAQ.md",
"date": "2025-01-31T15:02:49",
"stars": 2066,
"description": "A lightweight CLI and local API server to create, run and manage macOS and Linux virtual machines (VMs) natively on Apple Silicon.",
"file_size": 1541
} |
# Local File Organizer: AI File Management Run Entirely on Your Device, Privacy Assured
Tired of digital clutter? Overwhelmed by disorganized files scattered across your computer? Let AI do the heavy lifting! The Local File Organizer is your personal organizing assistant, using cutting-edge AI to bring order to your file chaos - all while respecting your privacy.
## How It Works 💡
Before:
```
/home/user/messy_documents/
├── IMG_20230515_140322.jpg
├── IMG_20230516_083045.jpg
├── IMG_20230517_192130.jpg
├── budget_2023.xlsx
├── meeting_notes_05152023.txt
├── project_proposal_draft.docx
├── random_thoughts.txt
├── recipe_chocolate_cake.pdf
├── scan0001.pdf
├── vacation_itinerary.docx
└── work_presentation.pptx
0 directories, 11 files
```
After:
```
/home/user/organized_documents/
├── Financial
│ └── 2023_Budget_Spreadsheet.xlsx
├── Food_and_Recipes
│ └── Chocolate_Cake_Recipe.pdf
├── Meetings_and_Notes
│ └── Team_Meeting_Notes_May_15_2023.txt
├── Personal
│ └── Random_Thoughts_and_Ideas.txt
├── Photos
│ ├── Cityscape_Sunset_May_17_2023.jpg
│ ├── Morning_Coffee_Shop_May_16_2023.jpg
│ └── Office_Team_Lunch_May_15_2023.jpg
├── Travel
│ └── Summer_Vacation_Itinerary_2023.docx
└── Work
├── Project_X_Proposal_Draft.docx
├── Quarterly_Sales_Report.pdf
└── Marketing_Strategy_Presentation.pptx
7 directories, 11 files
```
## Updates 🚀
**[2024/09] v0.0.2**:
* Featured by [Nexa Gallery](https://nexaai.com/gallery) and [Nexa SDK Cookbook](https://github.com/NexaAI/nexa-sdk/tree/main/examples)!
* Dry Run Mode: check sorting results before committing changes
* Silent Mode: save all logs to a txt file for quieter operation
* Added file support: `.md`, `.xlsx`, `.ppt`, and `.csv`
* Three sorting options: by content, by date, and by type
* The default text model is now [Llama3.2 3B](https://nexaai.com/meta/Llama3.2-3B-Instruct/gguf-q3_K_M/file)
* Improved CLI interaction experience
* Added real-time progress bar for file analysis
Please update the project by deleting the original project folder and reinstalling the requirements. Refer to the installation guide from Step 4.
## Roadmap 📅
- [ ] Copilot Mode: chat with the AI to tell it how you want to sort the files (i.e., read and rename all the PDFs)
- [ ] Change models with CLI
- [ ] ebook format support
- [ ] audio file support
- [ ] video file support
- [ ] Implement best practices like Johnny Decimal
- [ ] Check file duplication
- [ ] Dockerfile for easier installation
- [ ] People from [Nexa](https://github.com/NexaAI/nexa-sdk) are helping me make executables for macOS, Linux and Windows
## What It Does 🔍
This intelligent file organizer harnesses the power of advanced AI models, including language models (LMs) and vision-language models (VLMs), to automate the process of organizing files by:
* Scanning a specified input directory for files.
* Content Understanding:
- **Textual Analysis**: Uses the [Llama3.2 3B](https://nexaai.com/meta/Llama3.2-3B-Instruct/gguf-q3_K_M/file) to analyze and summarize text-based content, generating relevant descriptions and filenames.
- **Visual Content Analysis**: Uses the [LLaVA-v1.6](https://nexaai.com/liuhaotian/llava-v1.6-vicuna-7b/gguf-q4_0/file) , based on Vicuna-7B, to interpret visual files such as images, providing context-aware categorization and descriptions.
* Understanding the content of your files (text, images, and more) to generate relevant descriptions, folder names, and filenames.
* Organizing the files into a new directory structure based on the generated metadata.
The best part? All AI processing happens 100% on your local device using the [Nexa SDK](https://github.com/NexaAI/nexa-sdk). No internet connection required, no data leaves your computer, and no AI API is needed - keeping your files completely private and secure.
## Supported File Types 📁
- **Images:** `.png`, `.jpg`, `.jpeg`, `.gif`, `.bmp`
- **Text Files:** `.txt`, `.docx`, `.md`
- **Spreadsheets:** `.xlsx`, `.csv`
- **Presentations:** `.ppt`, `.pptx`
- **PDFs:** `.pdf`
## Prerequisites 💻
- **Operating System:** Compatible with Windows, macOS, and Linux.
- **Python Version:** Python 3.12
- **Conda:** Anaconda or Miniconda installed.
- **Git:** For cloning the repository (or you can download the code as a ZIP file).
## Installation 🛠
> For SDK installation and model-related issues, please post on [here](https://github.com/NexaAI/nexa-sdk/issues).
### 1. Install Python
Before installing the Local File Organizer, make sure you have Python installed on your system. We recommend using Python 3.12 or later.
You can download Python from [the official website](https://www.python.org/downloads/).
Follow the installation instructions for your operating system.
### 2. Clone the Repository
Clone this repository to your local machine using Git:
```zsh
git clone https://github.com/QiuYannnn/Local-File-Organizer.git
```
Or download the repository as a ZIP file and extract it to your desired location.
### 3. Set Up the Python Environment
Create a new Conda environment named `local_file_organizer` with Python 3.12:
```zsh
conda create --name local_file_organizer python=3.12
```
Activate the environment:
```zsh
conda activate local_file_organizer
```
### 4. Install Nexa SDK ️
#### CPU Installation
To install the CPU version of Nexa SDK, run:
```bash
pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/cpu --extra-index-url https://pypi.org/simple --no-cache-dir
```
#### GPU Installation (Metal - macOS)
For the GPU version supporting Metal (macOS), run:
```bash
CMAKE_ARGS="-DGGML_METAL=ON -DSD_METAL=ON" pip install nexaai --prefer-binary --index-url https://nexaai.github.io/nexa-sdk/whl/metal --extra-index-url https://pypi.org/simple --no-cache-dir
```
For detailed installation instructions of Nexa SDK for **CUDA** and **AMD GPU** support, please refer to the [Installation section](https://github.com/NexaAI/nexa-sdk?tab=readme-ov-file#installation) in the main README.
### 5. Install Dependencies
1. Ensure you are in the project directory:
```zsh
cd path/to/Local-File-Organizer
```
Replace `path/to/Local-File-Organizer` with the actual path where you cloned or extracted the project.
2. Install the required dependencies:
```zsh
pip install -r requirements.txt
```
**Note:** If you encounter issues with any packages, install them individually:
```zsh
pip install nexa Pillow pytesseract PyMuPDF python-docx
```
### 6. Running the Script 🎉
With the environment activated and dependencies installed, run the script using:
```zsh
python main.py
```
## Notes
- **SDK Models:**
- The script uses `NexaVLMInference` and `NexaTextInference` models [usage](https://docs.nexaai.com/sdk/python-interface/gguf).
- Ensure you have access to these models and they are correctly set up.
- You may need to download model files or configure paths.
- **Dependencies:**
- **pytesseract:** Requires Tesseract OCR installed on your system.
- **macOS:** `brew install tesseract`
- **Ubuntu/Linux:** `sudo apt-get install tesseract-ocr`
- **Windows:** Download from [Tesseract OCR Windows Installer](https://github.com/UB-Mannheim/tesseract/wiki)
- **PyMuPDF (fitz):** Used for reading PDFs.
- **Processing Time:**
- Processing may take time depending on the number and size of files.
- The script uses multiprocessing to improve performance.
- **Customizing Prompts:**
- You can adjust prompts in `data_processing.py` to change how metadata is generated.
## License
This project is dual-licensed under the MIT License and Apache 2.0 License. You may choose which license you prefer to use for this project.
- See the [MIT License](LICENSE-MIT) for more details. | {
"source": "QiuYannnn/Local-File-Organizer",
"title": "README.md",
"url": "https://github.com/QiuYannnn/Local-File-Organizer/blob/main/README.md",
"date": "2024-09-21T07:55:12",
"stars": 2062,
"description": "An AI-powered file management tool that ensures privacy by organizing local texts, images. Using Llama3.2 3B and Llava v1.6 models with the Nexa SDK, it intuitively scans, restructures, and organizes files for quick, seamless access and easy retrieval.",
"file_size": 7757
} |
There are so many reasons couples choose to enjoy a romantic getaway or honeymoon in Chicago, Illinois. The (often captioned) Windy City is a Midwest hub for food, culture, and things to do. Best of all, it's inclusive to everybody. No wonder Chicago's been voted the best big city in America.
Lovers come for a fairytale romance fueled by four distinct seasons, whether fall gives a warm and fuzzy feeling or the magic of Christmas beckons. As for bach parties, the colossal count of restaurants, bars and nightclubs means a jam-packed weekend of festivities is always on the cards. There's absolutely no getting bored here, especially if you love sports.
Capturing moments on camera is a given these days, and sharing them with friends and family back home comes hand in hand. If you're out and about, strapped for time to think of a clever Bean caption, we're on hand to help. Here are 100-plus Chicago captions to accompany your snaps: simply copy and paste and get back to enjoying the ride. The city's sumptuous treats are calling your name. | {
"source": "QiuYannnn/Local-File-Organizer",
"title": "sample_data/text_files/ccc.md",
"url": "https://github.com/QiuYannnn/Local-File-Organizer/blob/main/sample_data/text_files/ccc.md",
"date": "2024-09-21T07:55:12",
"stars": 2062,
"description": "An AI-powered file management tool that ensures privacy by organizing local texts, images. Using Llama3.2 3B and Llava v1.6 models with the Nexa SDK, it intuitively scans, restructures, and organizes files for quick, seamless access and easy retrieval.",
"file_size": 1049
} |
# Flux Gym
Dead simple web UI for training FLUX LoRA **with LOW VRAM (12GB/16GB/20GB) support.**
- **Frontend:** The WebUI forked from [AI-Toolkit](https://github.com/ostris/ai-toolkit) (Gradio UI created by https://x.com/multimodalart)
- **Backend:** The Training script powered by [Kohya Scripts](https://github.com/kohya-ss/sd-scripts)
FluxGym supports 100% of Kohya sd-scripts features through an [Advanced](#advanced) tab, which is hidden by default.

---
# What is this?
1. I wanted a super simple UI for training Flux LoRAs
2. The [AI-Toolkit](https://github.com/ostris/ai-toolkit) project is great, and the gradio UI contribution by [@multimodalart](https://x.com/multimodalart) is perfect, but the project only works for 24GB VRAM.
3. [Kohya Scripts](https://github.com/kohya-ss/sd-scripts) are very flexible and powerful for training FLUX, but you need to run in terminal.
4. What if you could have the simplicity of AI-Toolkit WebUI and the flexibility of Kohya Scripts?
5. Flux Gym was born. Supports 12GB, 16GB, 20GB VRAMs, and extensible since it uses Kohya Scripts underneath.
---
# News
- September 25: Docker support + Autodownload Models (No need to manually download models when setting up) + Support for custom base models (not just flux-dev but anything; you just need to include it in the [models.yaml](models.yaml) file).
- September 16: Added "Publish to Huggingface" + 100% Kohya sd-scripts feature support: https://x.com/cocktailpeanut/status/1835719701172756592
- September 11: Automatic Sample Image Generation + Custom Resolution: https://x.com/cocktailpeanut/status/1833881392482066638
---
# Supported Models
1. Flux1-dev
2. Flux1-dev2pro (as explained here: https://medium.com/@zhiwangshi28/why-flux-lora-so-hard-to-train-and-how-to-overcome-it-a0c70bc59eaf)
3. Flux1-schnell (Couldn't get high quality results, so not really recommended, but feel free to experiment with it)
4. More?
The models are automatically downloaded when you start training with the model selected.
You can easily add more to the supported models list by editing the [models.yaml](models.yaml) file. If you want to share some interesting base models, please send a PR.
---
# How people are using Fluxgym
Here are people using Fluxgym to locally train Lora sharing their experience:
https://pinokio.computer/item?uri=https://github.com/cocktailpeanut/fluxgym
# More Info
To learn more, check out this X thread: https://x.com/cocktailpeanut/status/1832084951115972653
# Install
## 1. One-Click Install
You can automatically install and launch everything locally with Pinokio 1-click launcher: https://pinokio.computer/item?uri=https://github.com/cocktailpeanut/fluxgym
## 2. Install Manually
First clone Fluxgym and kohya-ss/sd-scripts:
```
git clone https://github.com/cocktailpeanut/fluxgym
cd fluxgym
git clone -b sd3 https://github.com/kohya-ss/sd-scripts
```
Your folder structure will look like this:
```
/fluxgym
app.py
requirements.txt
/sd-scripts
```
Now activate a venv from the root `fluxgym` folder:
If you're on Windows:
```
python -m venv env
env\Scripts\activate
```
If you're on Linux:
```
python -m venv env
source env/bin/activate
```
This will create an `env` folder right below the `fluxgym` folder:
```
/fluxgym
app.py
requirements.txt
/sd-scripts
/env
```
Now go to the `sd-scripts` folder and install dependencies to the activated environment:
```
cd sd-scripts
pip install -r requirements.txt
```
Now come back to the root folder and install the app dependencies:
```
cd ..
pip install -r requirements.txt
```
Finally, install pytorch Nightly:
```
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
```
# Start
Go back to the root `fluxgym` folder, with the venv activated, run:
```
python app.py
```
> Make sure to have the venv activated before running `python app.py`.
>
> Windows: `env\Scripts\activate`
> Linux: `source env/bin/activate`
## 3. Install via Docker
First clone Fluxgym and kohya-ss/sd-scripts:
```
git clone https://github.com/cocktailpeanut/fluxgym
cd fluxgym
git clone -b sd3 https://github.com/kohya-ss/sd-scripts
```
Check your user ID and group ID, and if they are not 1000, override them via the `PUID` and `PGID` environment variables.
You can find out what they are on Linux by running the following command: `id`
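For example, if `id` reports UID/GID 1001, you might export both before building (this assumes your setup passes `PUID`/`PGID` from the shell environment or an `.env` file to `docker compose`; adjust to however you supply them):
```
id
# uid=1001(me) gid=1001(me) groups=1001(me),...
export PUID=1001
export PGID=1001
```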
Now build the image and run it via `docker-compose`:
```
docker compose up -d --build
```
Open a web browser and go to the IP address of the computer/VM: http://localhost:7860
# Usage
The usage is pretty straightforward:
1. Enter the lora info
2. Upload images and caption them (using the trigger word)
3. Click "start".
That's all!

# Configuration
## Sample Images
By default fluxgym doesn't generate any sample images during training.
You can however configure Fluxgym to automatically generate sample images for every N steps. Here's what it looks like:

To turn this on, just set the two fields:
1. **Sample Image Prompts:** These prompts will be used to automatically generate images during training. If you want multiple prompts, separate each prompt with a new line.
2. **Sample Image Every N Steps:** If your "Expected training steps" is 960 and your "Sample Image Every N Steps" is 100, the images will be generated at step 100, 200, 300, 400, 500, 600, 700, 800, 900, for EACH prompt.

## Advanced Sample Images
Thanks to the built-in syntax from [kohya/sd-scripts](https://github.com/kohya-ss/sd-scripts?tab=readme-ov-file#sample-image-generation-during-training), you can control exactly how the sample images are generated during the training phase:
Let's say the trigger word is **hrld person.** Normally you would try sample prompts like:
```
hrld person is riding a bike
hrld person is a body builder
hrld person is a rock star
```
But for every prompt you can include **advanced flags** to fully control the image generation process. For example, the `--d` flag lets you specify the SEED.
Specifying a seed means every sample image will use that exact seed, which means you can literally see the LoRA evolve. Here's an example usage:
```
hrld person is riding a bike --d 42
hrld person is a body builder --d 42
hrld person is a rock star --d 42
```
Here's what it looks like in the UI:

And here are the results:

In addition to the `--d` flag, here are other flags you can use:
- `--n`: Negative prompt up to the next option.
- `--w`: Specifies the width of the generated image.
- `--h`: Specifies the height of the generated image.
- `--d`: Specifies the seed of the generated image.
- `--l`: Specifies the CFG scale of the generated image.
- `--s`: Specifies the number of steps in the generation.
The prompt weighting such as `( )` and `[ ]` also work. (Learn more about [Attention/Emphasis](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#attentionemphasis))
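Flags can be combined on a single prompt line; for example (an illustrative combination of the flags listed above, with arbitrary values):
```
hrld person is a rock star --d 42 --w 768 --h 1024 --s 28 --l 4.5 --n blurry, low quality
```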
## Publishing to Huggingface
1. Get your Huggingface Token from https://huggingface.co/settings/tokens
2. Enter the token in the "Huggingface Token" field and click "Login". This will save the token text in a local file named `HF_TOKEN` (All local and private).
3. Once you're logged in, you will be able to select a trained LoRA from the dropdown, edit the name if you want, and publish to Huggingface.

## Advanced
The advanced tab is automatically constructed by parsing the launch flags available to the latest version of [kohya sd-scripts](https://github.com/kohya-ss/sd-scripts). This means Fluxgym is a full-fledged UI for using the Kohya script.
> By default the advanced tab is hidden. You can click the "advanced" accordion to expand it.

## Advanced Features
### Uploading Caption Files
You can also upload the caption files along with the image files. You just need to follow the convention:
1. Every caption file must be a `.txt` file.
2. Each caption file needs to have a corresponding image file that has the same name.
3. For example, if you have an image file named `img0.png`, the corresponding caption file must be `img0.txt`. | {
"source": "cocktailpeanut/fluxgym",
"title": "README.md",
"url": "https://github.com/cocktailpeanut/fluxgym/blob/main/README.md",
"date": "2024-09-05T11:25:42",
"stars": 2047,
"description": "Dead simple FLUX LoRA training UI with LOW VRAM support",
"file_size": 8279
} |
<div align="center">
<img src="./docs/images/github-cover-new.png" alt="RAG Web UI Demo">
<br />
<p>
<strong>Knowledge Base Management Based on RAG (Retrieval-Augmented Generation)</strong>
</p>
<p>
<a href="https://github.com/rag-web-ui/rag-web-ui/blob/main/LICENSE"><img src="https://img.shields.io/github/license/rag-web-ui/rag-web-ui" alt="License"></a>
<a href="#"><img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="Python"></a>
<a href="#"><img src="https://img.shields.io/badge/node-%3E%3D18-green.svg" alt="Node"></a>
<a href="#"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg" alt="PRs Welcome"></a>
<a href="#"><img src="https://github.com/rag-web-ui/rag-web-ui/actions/workflows/test.yml/badge.svg" alt="CI"></a>
</p>
<p>
<a href="#features">Features</a> •
<a href="#quick-start">Quick Start</a> •
<a href="#deployment-guide">Deployment</a> •
<a href="#architecture">Architecture</a> •
<a href="#development">Development</a> •
<a href="#contributing">Contributing</a>
</p>
<h4>
<span>English</span> |
<a href="README.zh-CN.md">简体中文</a>
</h4>
</div>
## 📖 Introduction
RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology that helps build intelligent Q&A systems based on your own knowledge base. By combining document retrieval and large language models, it achieves accurate and reliable knowledge-based question answering services.
The system supports multiple **LLM** deployment options, including cloud services like **OpenAI** and **DeepSeek**, as well as local model deployment through **Ollama**, meeting privacy and cost requirements in different scenarios.
It also provides OpenAPI interfaces for convenient knowledge base access via API calls.
## ✨ Features
- 📚 **Intelligent Document Management**
- Support for multiple document formats (PDF, DOCX, Markdown, Text)
- Automatic document chunking and vectorization
- Support for async document processing and incremental updates
- 🤖 **Advanced Dialogue Engine**
- Precise retrieval and generation based on RAG
- Support for multi-turn contextual dialogue
- Support for reference citations in conversations
- 🎯 **Robust Architecture**
- Frontend-backend separation design
- Distributed file storage
- High-performance vector database: Support for ChromaDB, Qdrant with easy switching through Factory pattern
## 🖼️ Screenshots
<div align="center">
<img src="./docs/images/screenshot1.png" alt="Knowledge Base Management" width="800">
<p><em>Knowledge Base Management Dashboard</em></p>
<img src="./docs/images/screenshot2.png" alt="Chat Interface" width="800">
<p><em>Document Processing Dashboard</em></p>
<img src="./docs/images/screenshot3.png" alt="Document Processing" width="800">
<p><em>Document List</em></p>
<img src="./docs/images/screenshot4.png" alt="System Settings" width="800">
<p><em>Intelligent Chat Interface with References</em></p>
<img src="./docs/images/screenshot5.png" alt="Analytics Dashboard" width="800">
<p><em>API Key Management</em></p>
<img src="./docs/images/screenshot6.png" alt="Analytics Dashboard" width="800">
<p><em>API Reference</em></p>
</div>
## Project Flowchart
```mermaid
graph TB
%% Role Definitions
client["Caller/User"]
open_api["Open API"]
subgraph import_process["Document Ingestion Process"]
direction TB
%% File Storage and Document Processing Flow
docs["Document Input<br/>(PDF/MD/TXT/DOCX)"]
job_id["Return Job ID"]
nfs["NFS"]
subgraph async_process["Asynchronous Document Processing"]
direction TB
preprocess["Document Preprocessing<br/>(Text Extraction/Cleaning)"]
split["Text Splitting<br/>(Segmentation/Overlap)"]
subgraph embedding_process["Embedding Service"]
direction LR
embedding_api["Embedding API"] --> embedding_server["Embedding Server"]
end
store[(Vector Database)]
%% Internal Flow of Asynchronous Processing
preprocess --> split
split --> embedding_api
embedding_server --> store
end
subgraph job_query["Job Status Query"]
direction TB
job_status["Job Status<br/>(Processing/Completed/Failed)"]
end
end
%% Query Service Flow
subgraph query_process["Query Service"]
direction LR
user_history["User History"] --> query["User Query<br/>(Based on User History)"]
query --> query_embed["Query Embedding"]
query_embed --> retrieve["Vector Retrieval"]
retrieve --> rerank["Re-ranking<br/>(Cross-Encoder)"]
rerank --> context["Context Assembly"]
context --> llm["LLM Generation"]
llm --> response["Final Response"]
query -.-> rerank
end
%% Main Flow Connections
client --> |"1.Upload Document"| docs
docs --> |"2.Generate"| job_id
docs --> |"3a.Trigger"| async_process
job_id --> |"3b.Return"| client
docs --> nfs
nfs --> preprocess
%% Open API Retrieval Flow
open_api --> |"Retrieve Context"| retrieval_service["Retrieval Service"]
retrieval_service --> |"Access"| store
retrieval_service --> |"Return Context"| open_api
%% Status Query Flow
client --> |"4.Poll"| job_status
job_status --> |"5.Return Progress"| client
%% Database connects to Query Service
store --> retrieve
%% Style Definitions (Adjusted to match GitHub theme colors)
classDef process fill:#d1ecf1,stroke:#0077b6,stroke-width:1px
classDef database fill:#e2eafc,stroke:#003566,stroke-width:1px
classDef input fill:#caf0f8,stroke:#0077b6,stroke-width:1px
classDef output fill:#ffc8dd,stroke:#d00000,stroke-width:1px
classDef rerank fill:#cdb4db,stroke:#5a189a,stroke-width:1px
classDef async fill:#f8edeb,stroke:#7f5539,stroke-width:1px,stroke-dasharray: 5 5
classDef actor fill:#fefae0,stroke:#606c38,stroke-width:1px
classDef jobQuery fill:#ffedd8,stroke:#ca6702,stroke-width:1px
classDef queryProcess fill:#d8f3dc,stroke:#40916c,stroke-width:1px
classDef embeddingService fill:#ffe5d9,stroke:#9d0208,stroke-width:1px
classDef importProcess fill:#e5e5e5,stroke:#495057,stroke-width:1px
%% Applying classes to nodes
class docs,query,retrieval_service input
class preprocess,split,query_embed,retrieve,context,llm process
class store,nfs database
class response,job_id,job_status output
class rerank rerank
class async_process async
class client,open_api actor
class job_query jobQuery
style query_process fill:#d8f3dc,stroke:#40916c,stroke-width:1px
style embedding_process fill:#ffe5d9,stroke:#9d0208,stroke-width:1px
style import_process fill:#e5e5e5,stroke:#495057,stroke-width:1px
style job_query fill:#ffedd8,stroke:#ca6702,stroke-width:1px
```
## 🚀 Quick Start
### Prerequisites
- Docker & Docker Compose v2.0+
- Node.js 18+
- Python 3.9+
- 8GB+ RAM
### Installation
1. Clone the repository
```bash
git clone https://github.com/rag-web-ui/rag-web-ui.git
cd rag-web-ui
```
2. Configure environment variables
You can check the details in the configuration table below.
```bash
cp .env.example .env
```
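For a quick start with OpenAI, a minimal `.env` could look like this (illustrative values; the service hostnames `db`, `chromadb` and `minio` assume the bundled Docker Compose setup — see the configuration tables below for the full list of options):

```env
# LLM
CHAT_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-4

# Embeddings
EMBEDDINGS_PROVIDER=openai
OPENAI_EMBEDDINGS_MODEL=text-embedding-ada-002

# Vector store
VECTOR_STORE_TYPE=chroma
CHROMA_DB_HOST=chromadb
CHROMA_DB_PORT=8000

# MySQL, MinIO and JWT
MYSQL_SERVER=db
MYSQL_USER=ragwebui
MYSQL_PASSWORD=ragwebui
MYSQL_DATABASE=ragwebui
MINIO_ENDPOINT=minio:9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
MINIO_BUCKET_NAME=documents
SECRET_KEY=change-me
```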
3. Start services (development server)
```bash
docker compose up -d --build
```
### Verification
Access the following URLs after service startup:
- 🌐 Frontend UI: http://127.0.0.1.nip.io
- 📚 API Documentation: http://127.0.0.1.nip.io/redoc
- 💾 MinIO Console: http://127.0.0.1.nip.io:9001
## 🏗️ Architecture
### Backend Stack
- 🐍 **Python FastAPI**: High-performance async web framework
- 🗄️ **MySQL + ChromaDB**: Relational + Vector databases
- 📦 **MinIO**: Distributed object storage
- 🔗 **Langchain**: LLM application framework
- 🔒 **JWT + OAuth2**: Authentication
### Frontend Stack
- ⚛️ **Next.js 14**: React framework
- 📘 **TypeScript**: Type safety
- 🎨 **Tailwind CSS**: Utility-first CSS
- 🎯 **Shadcn/UI**: High-quality components
- 🤖 **Vercel AI SDK**: AI integration
## 📈 Performance Optimization
The system is optimized in the following aspects:
- ⚡️ Incremental document processing and async chunking
- 🔄 Streaming responses and real-time feedback
- 📑 Vector database performance tuning
- 🎯 Distributed task processing
## 📖 Development Guide
```bash
docker compose -f docker-compose.dev.yml up -d --build
```
## 🔧 Configuration
### Core Configuration
| Parameter | Description | Default | Required |
| --------------------------- | -------------------------- | --------- | -------- |
| MYSQL_SERVER | MySQL Server Address | localhost | ✅ |
| MYSQL_USER | MySQL Username | postgres | ✅ |
| MYSQL_PASSWORD | MySQL Password | postgres | ✅ |
| MYSQL_DATABASE | MySQL Database Name | ragwebui | ✅ |
| SECRET_KEY | JWT Secret Key | - | ✅ |
| ACCESS_TOKEN_EXPIRE_MINUTES | JWT Token Expiry (minutes) | 30 | ✅ |
### LLM Configuration
| Parameter | Description | Default | Applicable |
| ----------------- | --------------------- | ------------------------- | --------------------- |
| CHAT_PROVIDER | LLM Service Provider | openai | ✅ |
| OPENAI_API_KEY | OpenAI API Key | - | Required for OpenAI |
| OPENAI_API_BASE | OpenAI API Base URL | https://api.openai.com/v1 | Optional for OpenAI |
| OPENAI_MODEL | OpenAI Model Name | gpt-4 | Required for OpenAI |
| DEEPSEEK_API_KEY | DeepSeek API Key | - | Required for DeepSeek |
| DEEPSEEK_API_BASE | DeepSeek API Base URL | - | Required for DeepSeek |
| DEEPSEEK_MODEL | DeepSeek Model Name | - | Required for DeepSeek |
| OLLAMA_API_BASE | Ollama API Base URL | http://localhost:11434 | Required for Ollama |
| OLLAMA_MODEL | Ollama Model Name | llama2 | Required for Ollama |
### Embedding Configuration
| Parameter | Description | Default | Applicable |
| --------------------------- | -------------------------- | ---------------------- | ----------------------------- |
| EMBEDDINGS_PROVIDER | Embedding Service Provider | openai | ✅ |
| OPENAI_API_KEY | OpenAI API Key | - | Required for OpenAI Embedding |
| OPENAI_EMBEDDINGS_MODEL | OpenAI Embedding Model | text-embedding-ada-002 | Required for OpenAI Embedding |
| DASH_SCOPE_API_KEY | DashScope API Key | - | Required for DashScope |
| DASH_SCOPE_EMBEDDINGS_MODEL | DashScope Embedding Model | - | Required for DashScope |
| OLLAMA_EMBEDDINGS_MODEL | Ollama Embedding Model | deepseek-r1:7b | Required for Ollama Embedding |
### Vector Database Configuration
| Parameter | Description | Default | Applicable |
| ------------------ | --------------------------------- | --------------------- | --------------------- |
| VECTOR_STORE_TYPE | Vector Store Type | chroma | ✅ |
| CHROMA_DB_HOST | ChromaDB Server Address | localhost | Required for ChromaDB |
| CHROMA_DB_PORT | ChromaDB Port | 8000 | Required for ChromaDB |
| QDRANT_URL | Qdrant Vector Store URL | http://localhost:6333 | Required for Qdrant |
| QDRANT_PREFER_GRPC | Prefer gRPC Connection for Qdrant | true | Optional for Qdrant |
### Object Storage Configuration
| Parameter | Description | Default | Required |
| ----------------- | -------------------- | -------------- | -------- |
| MINIO_ENDPOINT | MinIO Server Address | localhost:9000 | ✅ |
| MINIO_ACCESS_KEY | MinIO Access Key | minioadmin | ✅ |
| MINIO_SECRET_KEY | MinIO Secret Key | minioadmin | ✅ |
| MINIO_BUCKET_NAME | MinIO Bucket Name | documents | ✅ |
### Other Configuration
| Parameter | Description | Default | Required |
| --------- | ---------------- | ------------- | -------- |
| TZ | Timezone Setting | Asia/Shanghai | ❌ |
## 🤝 Contributing
We welcome community contributions!
### Contribution Process
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to branch (`git push origin feature/AmazingFeature`)
5. Create a Pull Request
### Development Guidelines
- Follow [Python PEP 8](https://pep8.org/) coding standards
- Follow [Conventional Commits](https://www.conventionalcommits.org/)
### 🚧 Roadmap
- [x] Knowledge Base API Integration
- [ ] Workflow By Natural Language
- [ ] Multi-path Retrieval
- [x] Support Multiple Models
- [x] Support Multiple Vector Databases
## 🔧 Troubleshooting
For common issues and solutions, please refer to our [Troubleshooting Guide](docs/troubleshooting.md).
## 📄 License
This project is licensed under the [Apache-2.0 License](LICENSE)
## Note
This project is for learning and sharing RAG knowledge only. Please do not use it for commercial purposes. It is not ready for production use and is still under active development.
## 🙏 Acknowledgments
Thanks to these open source projects:
- [FastAPI](https://fastapi.tiangolo.com/)
- [Langchain](https://python.langchain.com/)
- [Next.js](https://nextjs.org/)
- [ChromaDB](https://www.trychroma.com/)

---
<div align="center">
If this project helps you, please consider giving it a ⭐️
</div> | {
"source": "rag-web-ui/rag-web-ui",
"title": "README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 14304
} |
<div align="center">
<img src="./docs/images/github-cover-new.png" alt="RAG Web UI">
<br />
<p>
<strong>基于 RAG (Retrieval-Augmented Generation) 的知识库管理</strong>
</p>
<p>
<a href="https://github.com/rag-web-ui/rag-web-ui/blob/main/LICENSE"><img src="https://img.shields.io/github/license/rag-web-ui/rag-web-ui" alt="License"></a>
<a href="#"><img src="https://img.shields.io/badge/python-3.9+-blue.svg" alt="Python"></a>
<a href="#"><img src="https://img.shields.io/badge/node-%3E%3D18-green.svg" alt="Node"></a>
<a href="#"><img src="https://github.com/rag-web-ui/rag-web-ui/actions/workflows/test.yml/badge.svg" alt="CI"></a>
<a href="#"><img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg" alt="PRs Welcome"></a>
</p>
<p>
<a href="#特性">特性</a> •
<a href="#快速开始">快速开始</a> •
<a href="#部署指南">部署指南</a> •
<a href="#技术架构">技术架构</a> •
<a href="#开发指南">开发指南</a> •
<a href="#贡献指南">贡献指南</a>
</p>
<h4>
<a href="README.md">English</a> |
<span>简体中文</span>
</h4>
</div>
## 📖 简介
RAG Web UI 是一个基于 RAG (Retrieval-Augmented Generation) 技术的智能对话系统,它能够帮助构建基于自有知识库的智能问答系统。通过结合文档检索和大语言模型,实现了准确、可靠的知识问答服务。
系统支持多种大语言模型部署方式,既可以使用 OpenAI、DeepSeek 等云端服务,也支持通过 Ollama 部署本地模型,满足不同场景下的隐私和成本需求。
同时提供 OpenAPI 接口,方便用户通过 API 调用知识库。
你可以通过[RAG 教程](./docs/tutorial/README.md)来了解整个项目的实现流程。
## ✨ 特性
- 📚 **智能文档管理**
- 支持多种文档格式 (PDF、DOCX、Markdown、Text)
- 文档自动分块和向量化
- 支持异步文档、增量处理
- 🤖 **先进的对话引擎**
- 基于 RAG 的精准检索和生成
- 支持上下文多轮对话
- 支持对话中引用角标查看原文
- 🎯 **合理架构**
- 前后端分离设计
- 分布式文件存储
- 高性能向量数据库: 支持 ChromaDB、Qdrant,通过 Factory 模式,可以方便的切换向量数据库
## 🖼️ 截图
<div align="center">
<img src="./docs/images/screenshot1.png" alt="Knowledge Base Management" width="800">
<p><em>知识库管理 Dashboard</em></p>
<img src="./docs/images/screenshot2.png" alt="Chat Interface" width="800">
<p><em>文档处理 Dashboard</em></p>
<img src="./docs/images/screenshot3.png" alt="Document Processing" width="800">
<p><em>文档列表</em></p>
<img src="./docs/images/screenshot4.png" alt="System Settings" width="800">
<p><em>带引用序号的智能对话界面</em></p>
<img src="./docs/images/screenshot5.png" alt="Analytics Dashboard" width="800">
<p><em>API Key 管理</em></p>
<img src="./docs/images/screenshot6.png" alt="Analytics Dashboard" width="800">
<p><em>API 参考</em></p>
</div>
## 项目流程图
```mermaid
graph TB
%% Role Definitions
client["Caller/User"]
open_api["Open API"]
subgraph import_process["Document Ingestion Process"]
direction TB
%% File Storage and Document Processing Flow
docs["Document Input<br/>(PDF/MD/TXT/DOCX)"]
job_id["Return Job ID"]
nfs["NFS"]
subgraph async_process["Asynchronous Document Processing"]
direction TB
preprocess["Document Preprocessing<br/>(Text Extraction/Cleaning)"]
split["Text Splitting<br/>(Segmentation/Overlap)"]
subgraph embedding_process["Embedding Service"]
direction LR
embedding_api["Embedding API"] --> embedding_server["Embedding Server"]
end
store[(Vector Database)]
%% Internal Flow of Asynchronous Processing
preprocess --> split
split --> embedding_api
embedding_server --> store
end
subgraph job_query["Job Status Query"]
direction TB
job_status["Job Status<br/>(Processing/Completed/Failed)"]
end
end
%% Query Service Flow
subgraph query_process["Query Service"]
direction LR
user_history["User History"] --> query["User Query<br/>(Based on User History)"]
query --> query_embed["Query Embedding"]
query_embed --> retrieve["Vector Retrieval"]
retrieve --> rerank["Re-ranking<br/>(Cross-Encoder)"]
rerank --> context["Context Assembly"]
context --> llm["LLM Generation"]
llm --> response["Final Response"]
query -.-> rerank
end
%% Main Flow Connections
client --> |"1.Upload Document"| docs
docs --> |"2.Generate"| job_id
docs --> |"3a.Trigger"| async_process
job_id --> |"3b.Return"| client
docs --> nfs
nfs --> preprocess
%% Open API Retrieval Flow
open_api --> |"Retrieve Context"| retrieval_service["Retrieval Service"]
retrieval_service --> |"Access"| store
retrieval_service --> |"Return Context"| open_api
%% Status Query Flow
client --> |"4.Poll"| job_status
job_status --> |"5.Return Progress"| client
%% Database connects to Query Service
store --> retrieve
%% Style Definitions (Adjusted to match GitHub theme colors)
classDef process fill:#d1ecf1,stroke:#0077b6,stroke-width:1px
classDef database fill:#e2eafc,stroke:#003566,stroke-width:1px
classDef input fill:#caf0f8,stroke:#0077b6,stroke-width:1px
classDef output fill:#ffc8dd,stroke:#d00000,stroke-width:1px
classDef rerank fill:#cdb4db,stroke:#5a189a,stroke-width:1px
classDef async fill:#f8edeb,stroke:#7f5539,stroke-width:1px,stroke-dasharray: 5 5
classDef actor fill:#fefae0,stroke:#606c38,stroke-width:1px
classDef jobQuery fill:#ffedd8,stroke:#ca6702,stroke-width:1px
classDef queryProcess fill:#d8f3dc,stroke:#40916c,stroke-width:1px
classDef embeddingService fill:#ffe5d9,stroke:#9d0208,stroke-width:1px
classDef importProcess fill:#e5e5e5,stroke:#495057,stroke-width:1px
%% Applying classes to nodes
class docs,query,retrieval_service input
class preprocess,split,query_embed,retrieve,context,llm process
class store,nfs database
class response,job_id,job_status output
class rerank rerank
class async_process async
class client,open_api actor
class job_query jobQuery
style query_process fill:#d8f3dc,stroke:#40916c,stroke-width:1px
style embedding_process fill:#ffe5d9,stroke:#9d0208,stroke-width:1px
style import_process fill:#e5e5e5,stroke:#495057,stroke-width:1px
style job_query fill:#ffedd8,stroke:#ca6702,stroke-width:1px
```
## 🚀 快速开始
### 环境要求
- Docker & Docker Compose v2.0+
- Node.js 18+
- Python 3.9+
- 8GB+ RAM
### 安装步骤
1. 克隆项目
```bash
git clone https://github.com/rag-web-ui/rag-web-ui.git
cd rag-web-ui
```
2. 配置环境变量
注意配置文件中的环境,详细配置往下看配置表格~
```bash
cp .env.example .env
```
3. 启动服务(开发环境的配置)
```bash
docker compose up -d --build
```
### 验证安装
服务启动后,可以通过以下地址访问:
- 🌐 前端界面: http://127.0.0.1.nip.io
- 📚 API 文档: http://127.0.0.1.nip.io/redoc
- 💾 MinIO 控制台: http://127.0.0.1.nip.io:9001
## 🏗️ 技术架构
### 后端技术栈
- 🐍 **Python FastAPI**: 高性能异步 Web 框架
- 🗄️ **MySQL + ChromaDB**: 关系型数据库 + 向量数据库
- 📦 **MinIO**: 对象存储
- 🔗 **Langchain**: LLM 应用开发框架
- 🔒 **JWT + OAuth2**: 身份认证
### 前端技术栈
- ⚛️ **Next.js 14**: React 应用框架
- 📘 **TypeScript**: 类型安全
- 🎨 **Tailwind CSS**: 原子化 CSS
- 🎯 **Shadcn/UI**: 高质量组件库
- 🤖 **Vercel AI SDK**: AI 功能集成
## 📈 性能优化
系统在以下方面进行了性能优化:
- ⚡️ 文档增量处理和异步分块
- 🔄 流式响应和实时反馈
- 📑 向量数据库性能调优
- 🎯 分布式任务处理
## 📖 开发指南
使用 docker compose 启动开发环境,可热更新
```bash
docker compose -f docker-compose.dev.yml up -d --build
```
访问地址:http://127.0.0.1.nip.io
## 🔧 配置说明
### 核心配置项
| 配置项 | 说明 | 默认值 | 必填 |
| --------------------------- | ------------------------ | --------- | ---- |
| MYSQL_SERVER | MySQL 服务器地址 | localhost | ✅ |
| MYSQL_USER | MySQL 用户名 | postgres | ✅ |
| MYSQL_PASSWORD | MySQL 密码 | postgres | ✅ |
| MYSQL_DATABASE | MySQL 数据库名 | ragwebui | ✅ |
| SECRET_KEY | JWT 加密密钥 | - | ✅ |
| ACCESS_TOKEN_EXPIRE_MINUTES | JWT token 过期时间(分钟) | 30 | ✅ |
### LLM 配置
| 配置项 | 说明 | 默认值 | 适用场景 |
| ----------------- | --------------------- | ------------------------- | -------------------------------------- |
| CHAT_PROVIDER | LLM 服务提供商 | openai | ✅ |
| OPENAI_API_KEY | OpenAI API 密钥 | - | 使用 OpenAI 时必填 |
| OPENAI_API_BASE | OpenAI API 基础 URL | https://api.openai.com/v1 | 使用 OpenAI 时可选 |
| OPENAI_MODEL | OpenAI 模型名称 | gpt-4 | 使用 OpenAI 时必填 |
| DEEPSEEK_API_KEY | DeepSeek API 密钥 | - | 使用 DeepSeek 时必填 |
| DEEPSEEK_API_BASE | DeepSeek API 基础 URL | - | 使用 DeepSeek 时必填 |
| DEEPSEEK_MODEL | DeepSeek 模型名称 | - | 使用 DeepSeek 时必填 |
| OLLAMA_API_BASE | Ollama API 基础 URL | http://localhost:11434 | 使用 Ollama 时必填, 注意需要先拉取模型 |
| OLLAMA_MODEL | Ollama 模型名称 | - | 使用 Ollama 时必填 |
### Embedding 配置
| 配置项 | 说明 | 默认值 | 适用场景 |
| --------------------------- | ------------------------ | ---------------------- | ---------------------------- |
| EMBEDDINGS_PROVIDER | Embedding 服务提供商 | openai | ✅ |
| OPENAI_API_KEY | OpenAI API 密钥 | - | 使用 OpenAI Embedding 时必填 |
| OPENAI_EMBEDDINGS_MODEL | OpenAI Embedding 模型 | text-embedding-ada-002 | 使用 OpenAI Embedding 时必填 |
| DASH_SCOPE_API_KEY | DashScope API 密钥 | - | 使用 DashScope 时必填 |
| DASH_SCOPE_EMBEDDINGS_MODEL | DashScope Embedding 模型 | - | 使用 DashScope 时必填 |
| OLLAMA_EMBEDDINGS_MODEL | Ollama Embedding 模型 | - | 使用 Ollama Embedding 时必填 |
### 向量数据库配置
| 配置项 | 说明 | 默认值 | 适用场景 |
| ------------------ | ------------------------- | --------------------- | -------------------- |
| VECTOR_STORE_TYPE | 向量存储类型 | chroma | ✅ |
| CHROMA_DB_HOST | ChromaDB 服务器地址 | localhost | 使用 ChromaDB 时必填 |
| CHROMA_DB_PORT | ChromaDB 端口 | 8000 | 使用 ChromaDB 时必填 |
| QDRANT_URL | Qdrant 向量存储 URL | http://localhost:6333 | 使用 Qdrant 时必填 |
| QDRANT_PREFER_GRPC | Qdrant 优先使用 gRPC 连接 | true | 使用 Qdrant 时可选 |
### 对象存储配置
| 配置项 | 说明 | 默认值 | 必填 |
| ----------------- | ---------------- | -------------- | ---- |
| MINIO_ENDPOINT | MinIO 服务器地址 | localhost:9000 | ✅ |
| MINIO_ACCESS_KEY | MinIO 访问密钥 | minioadmin | ✅ |
| MINIO_SECRET_KEY | MinIO 密钥 | minioadmin | ✅ |
| MINIO_BUCKET_NAME | MinIO 存储桶名称 | documents | ✅ |
### 其他配置
| 配置项 | 说明 | 默认值 | 必填 |
| ------ | -------- | ------------- | ---- |
| TZ | 时区设置 | Asia/Shanghai | ❌ |
## 🤝 贡献指南
我们非常欢迎社区贡献!
### 贡献流程
1. Fork 本仓库
2. 创建特性分支 (`git checkout -b feature/AmazingFeature`)
3. 提交改动 (`git commit -m 'Add some AmazingFeature'`)
4. 推送到分支 (`git push origin feature/AmazingFeature`)
5. 创建 Pull Request
### 开发规范
- 遵循 [Python PEP 8](https://pep8.org/) 代码规范
- 遵循 [Conventional Commits](https://www.conventionalcommits.org/) 提交规范
### 🚧 Roadmap
- [x] 知识库 API 集成
- [ ] 自然语言工作流
- [ ] 多路召回
- [x] 支持多模型
- [x] 支持多向量数据库
- [x] 支持本地模型
## 补充
本项目仅用于学习交流 RAG ,请勿用于商业用途,不具备在生产环境使用的条件,还在持续开发中。
## 🔧 常见问题
为了方便大家使用,我们整理了常见问题和解决方案,请参考[Troubleshooting Guide](docs/troubleshooting.md)。
## 📄 许可证
本项目采用 [Apache-2.0 许可证](LICENSE)
## 🙏 致谢
感谢以下开源项目:
- [FastAPI](https://fastapi.tiangolo.com/)
- [Langchain](https://python.langchain.com/)
- [Next.js](https://nextjs.org/)
- [ChromaDB](https://www.trychroma.com/)

---
<div align="center">
如果这个项目对你有帮助,请考虑给它一个 ⭐️
</div> | {
"source": "rag-web-ui/rag-web-ui",
"title": "README.zh-CN.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/README.zh-CN.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 11954
} |
# Troubleshooting Guide
## Database Issues
### 1. Database Tables Not Found
If you encounter "table not found" or similar database errors after starting your services, follow these steps:
#### Check if MySQL is Ready
```bash
# Check MySQL container status
docker ps | grep db
# Check MySQL logs
docker logs ragwebui-db-1
```
Make sure you see messages indicating MySQL is running successfully.
#### Check Database Connection
```bash
# Connect to MySQL container
docker exec -it ragwebui-db-1 mysql -u ragwebui -p
# Enter password when prompted: ragwebui
# Then check your database
mysql> USE ragwebui;
mysql> SHOW TABLES;
```
#### Check if Migrations Were Applied
```bash
# Check backend logs for migration messages
docker logs ragwebui-backend-1
# Enter container shell to check migration history
docker exec -it ragwebui-backend-1 sh
alembic history
alembic current
# If migrations need to be applied, run:
alembic upgrade head
exit
```
### 2. Database Connection Issues
#### Environment Variables
Verify your environment variables in `.env` file:
```dotenv
DB_HOST=db
DB_USER=ragwebui
DB_PASSWORD=ragwebui
DB_NAME=ragwebui
```
#### Service Order Problems
If the backend started before MySQL was ready:
```bash
# Restart backend service
docker compose -f docker-compose.yml restart backend
```
## Container and Service Issues
### 1. Container Startup Failures
#### Check Container Status
```bash
# View all container statuses
docker ps -a
# View specific container logs
docker logs <container-id>
```
#### Port Conflicts
```bash
# Check if ports are already in use
netstat -tuln | grep <port-number>
# Alternative port checking command
lsof -i :<port-number>
```
### 2. Network Issues
#### Check Network Connectivity
```bash
# List Docker networks
docker network ls
# Inspect network
docker network inspect ragwebui_default
```
#### Container Communication
```bash
# Test network connectivity between containers
docker exec ragwebui-backend-1 ping db
```
## Application-Specific Issues
### 1. Frontend Issues
#### Static Files Not Loading
- Check if the frontend container is running
- Verify nginx configuration
- Check console for CORS errors
#### Authentication Problems
- Clear browser cache and cookies
- Verify JWT token configuration
- Check backend logs for auth errors
### 2. Backend Issues
#### API Endpoints Not Responding
```bash
# Check backend logs
docker compose -f docker-compose.yml logs backend
# Verify backend health
curl http://localhost/api/health
```
#### Memory Issues
```bash
# Check container resource usage
docker stats
# View backend memory usage
docker exec ragwebui-backend-1 ps aux
```
## Complete Reset Procedure
If you need to start fresh:
1. Stop all containers:
```bash
docker compose -f docker-compose.yml down
```
2. Remove volumes to clear database:
```bash
docker compose -f docker-compose.yml down -v
```
3. Start everything again:
```bash
docker compose -f docker-compose.yml up -d
```
4. Wait a minute for MySQL to initialize, then run migrations:
```bash
docker exec -it ragwebui-backend-1 alembic upgrade head
```
## Debugging Tools
### 1. Logging
#### View All Service Logs
```bash
docker compose -f docker-compose.yml logs
```
#### Service-Specific Logs
```bash
docker compose -f docker-compose.yml logs backend
docker compose -f docker-compose.yml logs db
docker compose -f docker-compose.yml logs frontend
```
### 2. Database Debugging
#### Connect to Database CLI
```bash
docker exec -it ragwebui-db-1 mysql -u ragwebui -p
```
#### Backup and Restore
```bash
# Create backup
docker exec ragwebui-db-1 mysqldump -u ragwebui -p ragwebui > backup.sql
# Restore from backup
docker exec -i ragwebui-db-1 mysql -u ragwebui -p ragwebui < backup.sql
```
## Need More Help?
If you're still experiencing issues:
1. Check the application logs for specific error messages
2. Verify all environment variables are correctly set
3. Ensure all required services are running
4. Check system resources (CPU, memory, disk space)
5. Review recent changes that might have caused the issue
Remember: Most services need a few seconds to initialize after starting. If you get connection errors, wait a moment and try again. | {
"source": "rag-web-ui/rag-web-ui",
"title": "docs/troubleshooting.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/docs/troubleshooting.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 4273
} |
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).
## Getting Started
First, run the development server:
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.
You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.
This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.
## Learn More
To learn more about Next.js, take a look at the following resources:
- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.
You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!
## Deploy on Vercel
The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.
Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details. | {
"source": "rag-web-ui/rag-web-ui",
"title": "frontend/README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/frontend/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 1449
} |
---
name: Bug Report
about: Create a report to help us improve
title: ''
labels: 'bug'
assignees: ''
---
## 🚨 Before Creating an Issue
Please check our [Troubleshooting Guide](docs/troubleshooting.md) first. Many common issues can be resolved using the guide.
## Description
A clear and concise description of the issue.
## Steps To Reproduce
1.
2.
3.
## Expected Behavior
A clear description of what you expected to happen.
## Actual Behavior
What actually happened.
## Troubleshooting Steps Taken
Please list the steps from the troubleshooting guide you've already tried:
- [ ] Checked container status (`docker ps -a`)
- [ ] Reviewed service logs (`docker compose logs`)
- [ ] Verified database connection
- [ ] Checked environment variables
## Environment
- OS: [e.g. Ubuntu 22.04]
- Docker version: [e.g. 24.0.5]
- Docker Compose version: [e.g. 2.21.0]
## Environment Variables
Please provide your `.env` file contents (remove any sensitive information):
```env
# Replace sensitive values with ***
DB_HOST=
DB_USER=
DB_PASSWORD=*** # Do not share actual password
DB_NAME=
# Add any other environment variables you've modified
```
## Logs
Please provide relevant logs from:
```
# Add your logs here
```
## Additional Context
Add any other context about the problem here.
## Screenshots
If applicable, add screenshots to help explain your problem.
---
⚠️ **Note**: Make sure to remove any sensitive information (passwords, tokens, etc.) before submitting your issue. | {
"source": "rag-web-ui/rag-web-ui",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 1487
} |
<p align="center">
<a href="https://trychroma.com"><img src="https://user-images.githubusercontent.com/891664/227103090-6624bf7d-9524-4e05-9d2c-c28d5d451481.png" alt="Chroma logo"></a>
</p>
<p align="center">
<b>Chroma - the open-source embedding database</b>. <br />
The fastest way to build Python or JavaScript LLM apps with memory!
</p>
<p align="center">
<a href="https://discord.gg/MMeYNTmh3x" target="_blank">
<img src="https://img.shields.io/discord/1073293645303795742?cacheSeconds=3600" alt="Discord">
</a> |
<a href="https://github.com/chroma-core/chroma/blob/master/LICENSE" target="_blank">
<img src="https://img.shields.io/static/v1?label=license&message=Apache 2.0&color=white" alt="License">
</a> |
<a href="https://docs.trychroma.com/" target="_blank">
Docs
</a> |
<a href="https://www.trychroma.com/" target="_blank">
Homepage
</a>
</p>
```bash
pip install chromadb # python client
# for javascript, npm install chromadb!
# for client-server mode, chroma run --path /chroma_db_path
```
The core API is only 4 functions (run our [💡 Google Colab](https://colab.research.google.com/drive/1QEzFyqnoFxq7LUGyP1vzR4iLt9PpCDXv?usp=sharing) or [Replit template](https://replit.com/@swyx/BasicChromaStarter?v=1)):
```python
import chromadb
# setup Chroma in-memory, for easy prototyping. Can add persistence easily!
client = chromadb.Client()
# Create collection. get_collection, get_or_create_collection, delete_collection also available!
collection = client.create_collection("all-my-documents")
# Add docs to the collection. Can also update and delete. Row-based API coming soon!
collection.add(
documents=["This is document1", "This is document2"], # we handle tokenization, embedding, and indexing automatically. You can skip that and add your own embeddings as well
metadatas=[{"source": "notion"}, {"source": "google-docs"}], # filter on these!
ids=["doc1", "doc2"], # unique for each doc
)
# Query/search 2 most similar results. You can also .get by id
results = collection.query(
query_texts=["This is a query document"],
n_results=2,
# where={"metadata_field": "is_equal_to_this"}, # optional filter
# where_document={"$contains":"search_string"} # optional filter
)
```
## Features
- __Simple__: Fully-typed, fully-tested, fully-documented == happiness
- __Integrations__: [`🦜️🔗 LangChain`](https://blog.langchain.dev/langchain-chroma/) (python and js), [`🦙 LlamaIndex`](https://twitter.com/atroyn/status/1628557389762007040) and more soon
- __Dev, Test, Prod__: the same API that runs in your python notebook, scales to your cluster
- __Feature-rich__: Queries, filtering, density estimation and more
- __Free & Open Source__: Apache 2.0 Licensed
## Use case: ChatGPT for ______
For example, the `"Chat your data"` use case:
1. Add documents to your database. You can pass in your own embeddings, embedding function, or let Chroma embed them for you.
2. Query relevant documents with natural language.
3. Compose documents into the context window of an LLM like `GPT3` for additional summarization or analysis (see the sketch below).
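A rough sketch of step 3, reusing the `results` dictionary returned by `collection.query` above (the LLM call itself is a placeholder — plug in whichever model you use):

```python
# Sketch only: compose retrieved documents into a prompt for an LLM.
context = "\n\n".join(results["documents"][0])   # documents retrieved for the first query

prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    "Question: This is a query document"
)

# answer = my_llm.complete(prompt)  # placeholder for the LLM of your choice
print(prompt)
```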
## Embeddings?
What are embeddings?
- [Read the guide from OpenAI](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)
- __Literal__: Embedding something turns it from image/text/audio into a list of numbers. 🖼️ or 📄 => `[1.2, 2.1, ....]`. This process makes documents "understandable" to a machine learning model.
- __By analogy__: An embedding represents the essence of a document. This enables documents and queries with the same essence to be "near" each other and therefore easy to find.
- __Technical__: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
- __A small example__: If you search your photos for "famous bridge in San Francisco", embedding the query and comparing it to the embeddings of your photos and their metadata should return photos of the Golden Gate Bridge.
Embeddings databases (also known as **vector databases**) store embeddings and allow you to search by nearest neighbors rather than by substrings like a traditional database. By default, Chroma uses [Sentence Transformers](https://docs.trychroma.com/guides/embeddings#default:-all-minilm-l6-v2) to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
## Get involved
Chroma is a rapidly developing project. We welcome PR contributors and ideas for how to improve the project.
- [Join the conversation on Discord](https://discord.gg/MMeYNTmh3x) - `#contributing` channel
- [Review the 🛣️ Roadmap and contribute your ideas](https://docs.trychroma.com/roadmap)
- [Grab an issue and open a PR](https://github.com/chroma-core/chroma/issues) - [`Good first issue tag`](https://github.com/chroma-core/chroma/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22)
- [Read our contributing guide](https://docs.trychroma.com/contributing)
**Release Cadence**
We currently release new tagged versions of the `pypi` and `npm` packages on Mondays. Hotfixes go out at any time during the week.
## License
[Apache 2.0](./LICENSE) | {
"source": "rag-web-ui/rag-web-ui",
"title": "backend/uploads/README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/backend/uploads/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 5227
} |
# 🚀 十分钟搭建属于自己的 DeepSeek 知识库!完全开源、离线部署方案详解
## 💡 序言
还在为高额的 ChatGPT Plus 订阅费用发愁吗?担心公司机密文档上传到云端吗?本教程将带你使用完全开源的工具,在本地搭建一个基于 RAG (Retrieval-Augmented Generation) 技术的智能知识库系统。不仅完全离线,还能保护隐私,让你的文档秘密更有保障!
## 🛠️ 环境准备
在开始之前,请确保你的系统满足以下要求:
- 操作系统:Linux/macOS/Windows
- RAM:至少 8GB (推荐 16GB 以上)
- 硬盘空间:至少 20GB 可用空间
- 已安装:
- [Docker & Docker Compose v2.0+](https://docs.docker.com/get-docker/)
- [Ollama](https://ollama.com/)
### 1. 安装 Ollama
1. 访问 [Ollama 官网](https://ollama.com/) 下载并安装对应系统版本
2. 验证安装:
````bash
ollama --version
````
### 2. 下载必要的模型
我们需要两个模型:
- deepseek-r1:7b 用于对话生成
- nomic-embed-text 用于文本向量化
执行以下命令下载模型:
````bash
# 下载对话模型
ollama pull deepseek-r1:7b
# 下载向量模型
ollama pull nomic-embed-text
````
## 🔧 部署知识库系统
### 1. 克隆项目
````bash
git clone https://github.com/rag-web-ui/rag-web-ui.git
cd rag-web-ui
````
### 2. 配置环境变量
复制环境变量模板并编辑:
````bash
cp .env.example .env
````
编辑 .env 文件,配置如下:
````env
# LLM 配置
CHAT_PROVIDER=ollama
OLLAMA_API_BASE=http://host.docker.internal:11434
OLLAMA_MODEL=deepseek-r1:7b
# Embedding 配置
EMBEDDINGS_PROVIDER=ollama
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
# 向量数据库配置
VECTOR_STORE_TYPE=chroma
CHROMA_DB_HOST=chromadb
CHROMA_DB_PORT=8000
# MySQL 配置
MYSQL_SERVER=db
MYSQL_USER=ragwebui
MYSQL_PASSWORD=ragwebui
MYSQL_DATABASE=ragwebui
# MinIO 配置
MINIO_ENDPOINT=minio:9000
MINIO_ACCESS_KEY=minioadmin
MINIO_SECRET_KEY=minioadmin
MINIO_BUCKET_NAME=documents
````
注意:这里使用的是 Docker Compose 的服务名而不是 localhost,这样容器之间才能正确通信。
### 3. 启动服务
使用 Docker Compose 启动所有服务:
````bash
docker compose up -d --build
````
这将启动以下服务:
- 前端界面 (Next.js)
- 后端 API (FastAPI)
- MySQL 数据库
- ChromaDB 向量数据库
- MinIO 对象存储
- Ollama 服务
### 4. 验证部署
服务启动后,可以通过以下地址访问:
- 前端界面:<http://localhost:3000>
- API 文档:<http://localhost:8000/redoc>
- MinIO 控制台:<http://localhost:9001>
## 📚 使用指南
### 1. 创建知识库
1. 访问 <http://localhost:3000>
2. 登录后,点击"创建知识库"
3. 填写知识库名称和描述
4. 上传文档,选择切片方式和大小
5. 点击"创建"
6. 等待文档处理完成
支持以下格式:
- PDF
- DOCX
- Markdown
- Text
- ...
### 2. 开始对话
1. 点击"开始对话"
2. 输入问题
3. 系统会自动:
- 检索相关文档片段
- 使用 deepseek-r1:7b 模型生成回答
- 显示引用来源
## ❓ 常见问题
1. Ollama 服务无法连接
- 检查 Ollama 是否正常运行:`ollama list`
- 检查 Docker 网络配置是否正确
2. 文档处理失败
- 检查文档格式是否支持
- 查看后端日志:`docker compose logs -f backend`
3. 内存不足
- 调整 Docker 容器内存限制
- 考虑使用更小的模型
> 💡 性能与安全提示:建议单个文档不超过 10MB,定期备份数据,并及时修改默认密码以确保系统安全。
## 🎯 结语
通过以上步骤,你已经成功搭建了一个基于 RAG 技术的本地知识库系统。该系统完全本地化部署,无需担心数据隐私问题,同时借助 Ollama 的能力,可以实现高质量的知识问答服务。
需要注意的是,这个系统主要用于学习和个人使用,如果要用于生产环境,还需要进行更多的安全性和稳定性优化。
## 📚 参考资源
- [Ollama 官方文档](https://ollama.com/)
- [RAG Web UI 项目](https://github.com/rag-web-ui/rag-web-ui)
- [Docker 文档](https://docs.docker.com/)
希望这个教程对你搭建个人知识库有所帮助!如果遇到问题,欢迎查阅项目文档或在 GitHub 上提出 issue。 | {
"source": "rag-web-ui/rag-web-ui",
"title": "docs/blog/deploy-local.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/docs/blog/deploy-local.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 2684
} |
# 零基础入门:如何用 RAG (检索增强生成) 打造知识库 QA 系统
## 写在前面
马上今年就要过去了,这个项目是在 2025 年 1 月份利用闲暇时间发起的一个偏教育性质的项目。
其目的更多是希望可以在不依赖其他大的基础设施,结合自己多个 RAG 项目的经验,
用大家手头上已有的工具,通过跑通一个全流程的 RAG 知识库项目,来帮助更多的同学认识和入门 RAG 和知识库。
所以在这个项目里面,你当前还不会看到很多关于 RAG 的细节,例如多路召回、HyDE、Query 改写等能力(当然,我看到社区里面有能力的同学已经在帮忙实现这些能力 ING 了)。
项目流程图:
```mermaid
graph TB
%% Role Definitions
client["Caller/User"]
open_api["Open API"]
subgraph import_process["Document Ingestion Process"]
direction TB
%% File Storage and Document Processing Flow
docs["Document Input<br/>(PDF/MD/TXT/DOCX)"]
job_id["Return Job ID"]
nfs["NFS"]
subgraph async_process["Asynchronous Document Processing"]
direction TB
preprocess["Document Preprocessing<br/>(Text Extraction/Cleaning)"]
split["Text Splitting<br/>(Segmentation/Overlap)"]
subgraph embedding_process["Embedding Service"]
direction LR
embedding_api["Embedding API"] --> embedding_server["Embedding Server"]
end
store[(Vector Database)]
%% Internal Flow of Asynchronous Processing
preprocess --> split
split --> embedding_api
embedding_server --> store
end
subgraph job_query["Job Status Query"]
direction TB
job_status["Job Status<br/>(Processing/Completed/Failed)"]
end
end
%% Query Service Flow
subgraph query_process["Query Service"]
direction LR
user_history["User History"] --> query["User Query<br/>(Based on User History)"]
query --> query_embed["Query Embedding"]
query_embed --> retrieve["Vector Retrieval"]
retrieve --> rerank["Re-ranking<br/>(Cross-Encoder)"]
rerank --> context["Context Assembly"]
context --> llm["LLM Generation"]
llm --> response["Final Response"]
query -.-> rerank
end
%% Main Flow Connections
client --> |"1.Upload Document"| docs
docs --> |"2.Generate"| job_id
docs --> |"3a.Trigger"| async_process
job_id --> |"3b.Return"| client
docs --> nfs
nfs --> preprocess
%% Open API Retrieval Flow
open_api --> |"Retrieve Context"| retrieval_service["Retrieval Service"]
retrieval_service --> |"Access"| store
retrieval_service --> |"Return Context"| open_api
%% Status Query Flow
client --> |"4.Poll"| job_status
job_status --> |"5.Return Progress"| client
%% Database connects to Query Service
store --> retrieve
%% Style Definitions (Adjusted to match GitHub theme colors)
classDef process fill:#d1ecf1,stroke:#0077b6,stroke-width:1px
classDef database fill:#e2eafc,stroke:#003566,stroke-width:1px
classDef input fill:#caf0f8,stroke:#0077b6,stroke-width:1px
classDef output fill:#ffc8dd,stroke:#d00000,stroke-width:1px
classDef rerank fill:#cdb4db,stroke:#5a189a,stroke-width:1px
classDef async fill:#f8edeb,stroke:#7f5539,stroke-width:1px,stroke-dasharray: 5 5
classDef actor fill:#fefae0,stroke:#606c38,stroke-width:1px
classDef jobQuery fill:#ffedd8,stroke:#ca6702,stroke-width:1px
classDef queryProcess fill:#d8f3dc,stroke:#40916c,stroke-width:1px
classDef embeddingService fill:#ffe5d9,stroke:#9d0208,stroke-width:1px
classDef importProcess fill:#e5e5e5,stroke:#495057,stroke-width:1px
%% Applying classes to nodes
class docs,query,retrieval_service input
class preprocess,split,query_embed,retrieve,context,llm process
class store,nfs database
class response,job_id,job_status output
class rerank rerank
class async_process async
class client,open_api actor
class job_query jobQuery
style query_process fill:#d8f3dc,stroke:#40916c,stroke-width:1px
style embedding_process fill:#ffe5d9,stroke:#9d0208,stroke-width:1px
style import_process fill:#e5e5e5,stroke:#495057,stroke-width:1px
style job_query fill:#ffedd8,stroke:#ca6702,stroke-width:1px
```
## 1. 认识 RAG:为什么要"检索 + 生成"
### 1.1 什么是 RAG
RAG 是 Retrieval-Augmented Generation 的缩写,中文翻译为"检索增强生成"。它是一种将检索系统和生成式 AI 模型结合的技术方案,主要包含两个核心步骤:
1. 检索(Retrieval):根据用户输入的问题,从知识库中检索出相关的文档或信息片段
2. 生成(Generation):将检索到的相关信息作为上下文,结合用户问题,让大语言模型生成准确的回答
这种方案既能让模型基于最新的知识作答,又可以提供可溯源的参考依据,有效解决了大语言模型的知识时效性和事实准确性问题。
下面这张图展示了 RAG 在对话过程中的工作流程:
```mermaid
flowchart TD
User["用户: 问题"] --> Retrieval["检索模块"]
KB["知识库"] --> Doc1["相关文档1"]
KB --> Doc2["相关文档2"]
Retrieval --> Doc1
Retrieval --> Doc2
Doc1 --> LLM["大语言模型"]
Doc2 --> LLM
LLM --> Answer["助手: 生成的回答"]
style User fill:#f9f,stroke:#333
style KB fill:#bbf,stroke:#333
style LLM fill:#bfb,stroke:#333
style Answer fill:#fbb,stroke:#333
```
### 1.2 为什么需要 RAG
让我们对比三种问答方案的优缺点,来理解为什么 RAG 是一个更好的选择:
1. 传统检索式问答 (Retrieval QA)
- ✅ 可靠性高:答案直接来自知识库,有明确的来源
- ✅ 知识可更新:添加新文档即可更新知识
- ❌ 灵活性差:只能返回知识库中已有的内容
- ❌ 表达生硬:难以用自然语言组织答案
2. 纯 LLM 问答
- ✅ 表达自然:能用流畅的语言组织答案
- ✅ 灵活理解:可以理解各种表达方式的问题
- ❌ 知识固化:知识仅限于训练数据,无法及时更新
- ❌ 可靠性差:容易产生幻觉,难以验证答案准确性
3. RAG 方案
- ✅ 可靠且可溯源:答案基于检索到的具体文档
- ✅ 知识可更新:可以持续添加新的知识
- ✅ 表达自然:利用 LLM 的语言能力组织答案
- ✅ 灵活理解:能理解各种形式的问题
- ✅ 成本可控:主要消耗在必要的 API 调用上
RAG 通过将检索和生成相结合,既保留了传统检索问答的可靠性,又获得了 LLM 的灵活性和自然表达能力。它能让 AI 始终基于最新的、可信的知识来回答问题,同时保持对话的流畅自然。
RAG 的典型应用场景
- 企业知识库问答:帮助企业构建对内员工知识库或对外客户问答系统。
- 法律法规、论文等参考场景:需要给出权威来源或证据的回答。
- 任何需要"带有引用信息"的回答场景。
## 2. RAG 系统整体架构与数据流
### 2.1 核心组件
- 向量数据库:用来存储文档分块后的向量(如 ChromaDB、Qdrant)。
- Embedding(文本向量化):将文本转化为可比较的数值向量,形如 [0.1, 0.2, 0.3, 0.4, 0.5]
- 检索 (Retrieval):根据用户查询的向量相似度,检索出最相关的文档切片。
- 大语言模型:将检索到的上下文与用户问题组合,再由模型 (LLM) 生成最终答案。
- 生成 (Generation) 与引用:如何在回答中嵌入引用链接或标注,方便用户溯源。
### 2.2 RAG 的典型工作流
1) 用户输入问题。
2) 将问题向量化,然后检索最相似的文档切片。
3) 将检索到的上下文与问题拼接后输入 LLM。
4) LLM 输出带引用信息的回答。
5) 前端渲染回答、可选地在可视化界面中展示引用详情。
下面用一张图展示各个组件的交互流程:
```mermaid
flowchart TD
User["用户问题"] --> Embedding["文本向量化\n(Embedding)"]
DB[(知识库\n向量数据库)] --> Retrieval
Embedding --> Retrieval["检索模块\n(相似度匹配)"]
Retrieval --> Context["相关文档片段"]
Context --> Assembly["上下文组装"]
Assembly --> LLM["大语言模型"]
LLM --> Answer["带引用的回答"]
Answer --> Frontend["前端展示\n- 回答内容\n- 引用来源\n- 相关度"]
style User fill:#f9f,stroke:#333
style DB fill:#bbf,stroke:#333
style LLM fill:#bfb,stroke:#333
style Frontend fill:#fbb,stroke:#333
```
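结合上面的流程,可以用一段极简的伪代码勾勒 RAG 的主干逻辑(仅作示意,`embed`、`vector_db.search`、`llm.generate` 都是假设的接口,并非本项目的真实实现):

```python
# 极简 RAG 主干逻辑示意(接口均为假设,非本项目真实代码)
def rag_answer(question: str, vector_db, llm, embed, top_k: int = 4) -> str:
    query_vector = embed(question)                   # 1. 问题向量化
    chunks = vector_db.search(query_vector, top_k)   # 2. 检索最相似的文档切片
    context = "\n\n".join(
        f"[{i + 1}] {c['text']}" for i, c in enumerate(chunks)
    )                                                # 3. 拼接带编号的上下文
    prompt = (
        f"请根据以下上下文回答问题,并用 [citation:x] 标注引用:\n\n"
        f"{context}\n\n问题:{question}"
    )
    return llm.generate(prompt)                      # 4. LLM 生成带引用的回答
```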
## 3. 构建知识库:文档处理、嵌入、存储
### 3.1 文档上传与分块 (Chunking)
#### 3.1.1 为什么要对文档进行分块?
文档分块是 RAG 系统中的一个关键步骤,主要有以下几个原因:
1. 向量相似度计算的精度
- 过长的文本会导致向量表示不够精确
- 较小的文本块能更好地捕捉局部语义
- 有助于提高检索的准确性
2. LLM 的上下文窗口限制
- LLM 的输入长度是有限的 (虽然 Qwen 已经推出了 1M token 的上下文窗口 0.0)
- 需要将文档切分为适合 LLM 处理的大小
- 避免超出 token 限制导致信息丢失
3. 检索效率与成本
- 更小的文本块便于建立细粒度的索引
- 只需检索最相关的片段,节省 token 用量
- 减少无关信息,提高回答质量
4. 引用与溯源 (这个是 RAG 的特色功能)
- 便于定位信息的具体来源
- 可以给出更精确的引用范围
- 有助于用户验证答案的可靠性
#### 3.1.2 常见的分块策略
1. 固定长度分块
- 按字符数或 token 数进行切分
- 实现简单,但可能切断语义完整的段落
- 适合结构统一的文档
2. 语义分块
- 按段落、章节等自然语义单位切分
- 保持上下文的连贯性
- 需要考虑文档的具体结构
3. 重叠分块
- 相邻块之间保留一定重叠
- 避免关键信息被切分
- 增加了存储和计算开销
4. 递归分块
- 先大块后细分
- 保持层次结构
- 适合长文档处理
选择合适的分块策略需要考虑:
- 文档的类型和结构
- 向量数据库的特性
- LLM 的上下文窗口大小
- 检索效率与成本的平衡
例如如果是 markdown,可以按段落进行分块,如果是一般文档,可以按章节进行分块。
```text
+--------------------------------------------------+
| # Chapter 1 Title |
| Main content... |
| Main content... |
| |
| ## 1.1 Section Title |
| - List item 1 |
| - List item 2 |
| |
| ### 1.1.1 Subsection Title |
| Main paragraph... |
| |
| # Chapter 2 Title |
| Another paragraph... |
+--------------------------------------------------+
|
v
Chunking 切片
|
v
+------------------+ +-------------------+ +------------------+
| Chunk 1: | | Chunk 2: | | Chunk 3: |
| # Chapter 1 | | ## 1.1 Section | | # Chapter 2 |
| Title | | Title | | Title |
| Main content... | | - List item 1 | | Another |
| Main content... | | - List item 2 | | paragraph... |
+------------------+ | | +------------------+
| ### 1.1.1 |
| Subsection Title |
| Main paragraph... |
+-------------------+
```
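以 LangChain 的文本切分器为例,“固定长度 + 重叠分块”大致可以这样实现(仅作示意,假设已安装 langchain-text-splitters,chunk_size / chunk_overlap 需按文档类型和模型上下文窗口调整):

```python
# 固定长度 + 重叠分块示意(参数需按实际文档与模型调整)
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_document_text = "# Chapter 1 Title\nMain content...\n" * 100  # 示例文本

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # 每个切片的目标长度(字符数)
    chunk_overlap=50,  # 相邻切片之间保留重叠,避免关键信息被切断
)
chunks = splitter.split_text(long_document_text)
print(len(chunks), chunks[0][:50])
```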
### 3.2 文本向量化 (Embedding)
文本向量化是将自然语言文本转换为高维向量空间中的数值向量的过程。这种转换使得我们可以:
- 用数学方法计算文本之间的语义相似度
- 在向量空间中进行高效的相似度搜索
- 保留文本的语义信息和上下文关系
常用的文本向量化模型包括:
1. OpenAI Embeddings
- text-embedding-ada-002 模型
- 1536 维向量输出
- 适用于英文等多种语言
- 语义表达能力强
2. Sentence Transformers
- 开源的句子级别编码器
- 支持多语言
- 可以根据场景微调
- 计算效率高
在 RAG Web UI 中,主要是用的 OpenAI 的 text-embedding-ada-002 模型。
```python
from langchain_openai import OpenAIEmbeddings
...
embeddings = OpenAIEmbeddings(
openai_api_key=settings.OPENAI_API_KEY,
openai_api_base=settings.OPENAI_API_BASE
)
```
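拿到上面的 embeddings 对象后,就可以把文本转成向量(示意;text-embedding-ada-002 会返回 1536 维向量):

```python
# 示意:把查询和文档转换为向量(复用上面创建的 embeddings 对象)
query_vector = embeddings.embed_query("什么是 RAG?")
doc_vectors = embeddings.embed_documents(["RAG 是检索增强生成", "向量数据库存储 Embedding"])
print(len(query_vector), len(doc_vectors))  # 1536, 2
```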
### 3.3 向量数据库
在文本 Embedding 之后,需要将向量存储到向量数据库中,以便后续的检索和相似度计算。
在 RAG Web UI 中,主要是用的 ChromaDB 作为向量数据库, 同时支持使用 Factory 模式, 支持多种向量数据库,例如:
1. ChromaDB
2. Qdrant
3. Milvus
4. Faiss
5. Annoy
6. Pinecone
7. Zilliz
向量数据库除了存储向量,还要携带某些元信息(文档来源、段落位置等)方便查阅, 一般情况下,我们会存入这样的数据结构到向量数据库中:
除了向量之外, 我们还需要存入一些元数据, 例如:
```python
{
"id": "chunk_id",
"text": "段落内容",
"metadata": {"source": "文档来源", "position": "段落位置", "hash": "段落哈希值"}
}
```
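以 ChromaDB 为例,把切片连同元数据写入并按相似度检索,大致如下(仅作示意,非本项目真实代码):

```python
# 示意:将文本切片连同元数据写入 ChromaDB 并检索
import chromadb

client = chromadb.Client()                       # 内存模式,便于演示
collection = client.create_collection("kb_demo")

collection.add(
    documents=["RAG 结合了检索与生成", "向量数据库支持相似度搜索"],
    metadatas=[
        {"source": "doc_a.md", "position": 1},
        {"source": "doc_b.md", "position": 3},
    ],
    ids=["chunk_1", "chunk_2"],
)

results = collection.query(query_texts=["什么是 RAG?"], n_results=1)
print(results["documents"], results["metadatas"])  # 检索结果附带元数据,便于引用溯源
```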
## 4. 检索与重排序:用最相关的上下文喂给大模型
### 4.1 相似度检索 (Similarity Search)
常用的相似度度量:余弦相似度、向量距离 (欧几里得距离) 等。
ChromaDB 支持多种相似度计算方法:
1. Cosine Similarity (余弦相似度)
- 计算两个向量夹角的余弦值
- 值域范围为 [-1,1],越接近 1 表示越相似
- 不受向量长度影响,只关注方向
- 计算公式: cos(θ) = (A·B)/(||A||·||B||)
2. L2 Distance (欧氏距离)
- 计算两个向量间的直线距离
- 值越小表示越相似
- 受向量长度影响
- 计算公式: d = √(Σ(ai-bi)²)
3. IP (Inner Product, 内积)
- 两个向量对应位置相乘后求和
- 值越大表示越相似
- 受向量长度影响
- 计算公式: IP = Σ(ai×bi)
ChromaDB 默认使用 Cosine Similarity,这也是最常用的相似度计算方法,因为:
- 计算简单高效
- 不受向量绝对大小影响
- 对于文本语义相似度计算效果好
- 结果容易解释和标准化
在实际使用中,可以根据具体场景选择合适的相似度算法:
- 如果向量已归一化,三种方法等价
- 对向量长度敏感时选择 Cosine
- 关注绝对距离时选择 L2
- 需要快速计算时可用 IP(三种度量的具体计算见下面的示例)
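用 NumPy 可以直观地对比这三种度量(仅作示意):

```python
# 示意:用 NumPy 计算余弦相似度、欧氏距离和内积
import numpy as np

a = np.array([0.1, 0.2, 0.3])
b = np.array([0.2, 0.1, 0.4])

cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))  # 越接近 1 越相似
l2 = np.linalg.norm(a - b)                                # 越小越相似
ip = a @ b                                                # 越大越相似
print(cosine, l2, ip)
```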
### 4.2 重排序 (Re-ranking) 重要吗?
重排序是一个重要的步骤,可以显著提升检索结果的质量。其工作原理如下:
1. 初步检索
- 首先使用向量相似度搜索召回一批候选文档(如前20-100条)
- 这一步计算快速但可能不够精确
2. Cross-Encoder 重排序
- 对召回的候选文档进行更精细的相关性打分
- Cross-Encoder 会同时看到 query 和文档内容,计算它们的匹配度
- 相比向量相似度,能更好地理解语义关联
- 但计算开销较大,所以只用于重排少量候选
3. 应用场景
- 多路召回:不同检索方式召回的结果需要统一排序
- 高精度要求:需要更准确的相关性排序
- 复杂查询:简单向量相似度可能不足以理解查询意图
4. 常见实现
- 使用预训练的 Cross-Encoder 模型(如 BERT)
- 可以针对具体任务进行微调
- 输出相关性分数用于重新排序
虽然重排序会增加一定延迟,但在对准确度要求较高的场景下,这个成本通常是值得的。
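一种常见的做法是用 sentence-transformers 的 CrossEncoder 对召回结果重新打分(示意;模型名只是常用的开源重排模型之一):

```python
# 示意:用 Cross-Encoder 对初步召回的候选文档重新打分并排序
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "什么是 RAG?"
candidates = ["RAG 是检索增强生成技术", "今天天气不错", "向量数据库用于相似度检索"]

scores = reranker.predict([(query, doc) for doc in candidates])
reranked = [doc for _, doc in sorted(zip(scores, candidates), reverse=True)]
print(reranked[:2])  # 只取相关性最高的若干条进入后续上下文组装
```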
### 4.3 拼接上下文与用户问题
在检索到相关文档片段后,需要将它们与用户问题拼接成合适的 prompt,以供 LLM 生成回答。
用户问题 + 检索到的上下文 = Prompt,最终由 LLM 输出回答。
以下是一些常见的拼接策略:
1. 基本结构
- System: 系统指令,说明 AI 助手的角色和任务
- Context: 检索到的相关文档片段
- Human: 用户的实际问题
- Assistant: AI 的回答
2. 拼接技巧
我们在项目中做了一个有意思的事情,就是可以使用 `[[citation:1]]` 这样的格式来引用检索到的上下文。
然后用户可以在前端通过 Markdown 的格式来展示引用信息, 并且通过弹窗来展示引用详情。

在 RAG Web UI 中, 我们使用 LangChain 的模板来实现这个功能:
可查阅: `backend/app/services/chat_service.py`
```python
from langchain.prompts import PromptTemplate
qa_system_prompt = (
"You are given a user question, and please write clean, concise and accurate answer to the question. "
"You will be given a set of related contexts to the question, which are numbered sequentially starting from 1. "
"Each context has an implicit reference number based on its position in the array (first context is 1, second is 2, etc.). "
"Please use these contexts and cite them using the format [citation:x] at the end of each sentence where applicable. "
"Your answer must be correct, accurate and written by an expert using an unbiased and professional tone. "
"Please limit to 1024 tokens. Do not give any information that is not related to the question, and do not repeat. "
"Say 'information is missing on' followed by the related topic, if the given context do not provide sufficient information. "
"If a sentence draws from multiple contexts, please list all applicable citations, like [citation:1][citation:2]. "
"Other than code and specific names and citations, your answer must be written in the same language as the question. "
"Be concise.\n\nContext: {context}\n\n"
"Remember: Cite contexts by their position number (1 for first context, 2 for second, etc.) and don't blindly "
"repeat the contexts verbatim."
)
```
## 5. 工程实战示例:RAG 在知识库 QA 中的流程
理论部分相信大家都已经了解,也看过不少相关文章,但可能没有真正动手实践过,或者项目太复杂无从下手,或是没有一个完整的项目可以参考。
在工程的实践中,去掉那些花里胡哨的东西, 直接上代码,直接上手实践,才是这个项目的意义所在。
这个项目中,用的都是目前最为流行的技术栈, 例如:
- 前端:React(Nextjs) + TailwindCSS + AI SDK
- 后端:FastAPI + LangChain + ChromaDB/Qdrant + MySQL + MinIO
- 部署:Docker + Docker Compose
让我们通过一个完整的工程实现示例,来理解 RAG 在知识库问答中的具体应用流程。我们将按照数据流的顺序,逐步解析关键代码的实现。
### 5.1 文档上传 → 异步处理
详细代码可以参考: `backend/app/services/document_processor.py`

从上面的系统架构图中可以看到,文档上传和处理的流程如下:
```mermaid
sequenceDiagram
participant Client
participant API
participant NFS
participant Queue
participant Worker
participant VectorDB
Client->>API: 上传文档
API->>NFS: 存储文档
API->>Queue: 创建处理任务
API-->>Client: 返回 Job ID
loop 状态查询
Client->>API: 查询进度 (Job ID)
API-->>Client: 返回处理状态
end
Queue->>Worker: 分发任务
Worker->>NFS: 读取文档
Worker->>Worker: 文本提取
Worker->>Worker: 文本分块
Worker->>Worker: 向量化处理
Worker->>VectorDB: 存储向量数据
Worker->>Queue: 更新任务状态
```
1. 用户上传文档 (PDF/MD/TXT/DOCX)
- 客户端发起文档上传请求
- 文档被临时存储到 NFS (Network File System)
- 系统生成并返回一个 Job ID 给客户端
2. 异步处理流程启动
- 文档预处理:提取文本、清洗数据
- 文本分块:按照设定的策略进行分段
- 向量化:通过 Embedding 服务将文本转换为向量
- 存储:将向量数据保存到向量数据库
3. 状态查询
- 客户端通过 Job ID 轮询任务状态
- 系统返回当前进度 (Processing/Completed/Failed)
这种异步处理的设计有以下优点:
- 支持大文件处理:不会因处理时间过长导致请求超时
- 提升用户体验:用户可以实时查看处理进度
- 系统解耦:文档处理与存储服务可以独立扩展
- 错误处理:失败任务可以重试,不影响其他上传
在代码实现中,主要涉及以下几个关键组件:
1. 文件上传接口
2. 任务队列系统
3. 异步处理服务
4. 状态查询接口
这种设计让整个文档处理流程更加健壮和可扩展。
当然这里的设计也考虑了一些小细节:例如在处理文档的时候,很多系统会选择先删后增,但这样会导致向量数据库中的旧数据先被删除,从而使检索结果不准确。所以我们这里通过一张临时表来实现,确保新的文件处理完成后,旧的文件才会被删除。
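如果想直观感受这种“接口立即返回 Job ID、后台异步处理”的模式,可以看下面这个基于 FastAPI BackgroundTasks 的极简示意(非本项目的真实实现,仅表达思路):

```python
# 极简示意:上传接口立即返回 Job ID,文档处理在后台异步执行(非本项目真实代码)
import uuid

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()
jobs: dict[str, str] = {}  # job_id -> 状态(演示用内存存储)


def process_document(job_id: str, file_name: str) -> None:
    try:
        # 这里依次执行:文本提取 -> 分块 -> 向量化 -> 写入向量数据库
        jobs[job_id] = "completed"
    except Exception:
        jobs[job_id] = "failed"


@app.post("/documents")
def upload(file_name: str, background_tasks: BackgroundTasks):
    job_id = str(uuid.uuid4())
    jobs[job_id] = "processing"
    background_tasks.add_task(process_document, job_id, file_name)
    return {"job_id": job_id}  # 客户端随后轮询 /jobs/{job_id}


@app.get("/jobs/{job_id}")
def job_status(job_id: str):
    return {"status": jobs.get(job_id, "not_found")}
```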
### 5.2 用户提问 → 检索 + LLM 生成
代码可查阅: `backend/app/services/chat_service.py`
前端使用 AI SDK 将请求发送到后台,后台接口接收后开始处理,用户 Query 的处理流程如下:
```mermaid
sequenceDiagram
actor User
participant Frontend
participant Backend
participant DB
participant VectorStore
participant LLM
User->>Frontend: 发送问题
Frontend->>Backend: 发送请求
rect rgb(200, 220, 250)
Note over Backend: 消息存储阶段
Backend->>DB: 存储用户问题(user类型)
Backend->>DB: 创建空assistant记录
end
rect rgb(200, 250, 220)
Note over Backend: 知识库准备阶段
Backend->>VectorStore: 初始化向量存储
Backend->>VectorStore: 获取相关知识库
end
rect rgb(250, 220, 200)
Note over Backend: RAG处理阶段
Backend->>VectorStore: 执行相似度检索
VectorStore-->>Backend: 返回相关文档
Backend->>LLM: 发送上下文化问题请求
LLM-->>Backend: 返回重构后的问题
Backend->>LLM: 发送最终生成请求
LLM-->>Backend: 流式返回答案
end
Backend-->>Frontend: 流式返回(context + answer)
rect rgb(220, 220, 250)
Note over Frontend: 响应处理阶段
Frontend->>Frontend: 解析context(base64)
Frontend->>Frontend: 解析引用标记
Frontend->>Frontend: 渲染回答和引用
end
Frontend-->>User: 展示答案和引用
```
1. **消息存储**
- 将用户的提问内容保存为 user 类型的消息记录
- 创建一个空的 assistant 类型消息记录作为占位符
2. **知识库准备**
- 根据传入的 knowledge_base_ids 获取相关知识库
- 初始化 OpenAI Embeddings
- 为每个知识库创建对应的向量存储 (Vector Store)
3. **检索增强生成 (RAG) 处理**
- 使用向量存储创建检索器 (Retriever)
- 构建两个关键提示词模板:
- `contextualize_q_prompt`: 用于理解聊天历史上下文,重新构造独立的问题
- `qa_prompt`: 用于生成最终答案,包含引用格式要求和语言适配等规则
4. **响应生成**
- 处理历史聊天记录,构建对话上下文
- 使用流式响应逐步生成内容
- 响应内容包含两部分:
- 相关文档上下文 (base64 编码)
- LLM 生成的回答内容
5. **结果处理**
- 实时返回生成的内容片段
- 更新数据库中的 assistant 消息记录
- 完整响应格式: `{context_base64}__LLM_RESPONSE__{answer}`
6. **异常处理**
- 捕获并记录生成过程中的错误
- 更新错误信息到消息记录
- 确保数据库会话正确关闭
前端接收到后台返回的 stream 返回以后,可开始解析这个 stream 后, 除了正常和其他 QA 聊天工具一样, 这里还多了一个引用信息, 所以需要解析出引用信息, 然后展示在页面上。
它是怎么运作的呢?前端会通过 `__LLM_RESPONSE__` 这个分隔符来解析:前面一部分是 RAG 检索出来的 context 信息(base64 编码,可以理解为检索出来的切片数组),后面是 LLM 基于 context 生成的回答,再通过 `[[citation:1]]` 这样的格式解析出引用信息。
```mermaid
flowchart TD
A[接收Stream响应] --> B{解析响应}
B -->|分割| C[Context部分]
B -->|分割| D[Answer部分]
C --> E[Base64解码]
E --> F[解析引用信息]
D --> G[解析引用标记]
G --> H[[citation:1]]
F --> I[准备引用数据]
H --> I
I --> J[渲染回答内容]
J --> K[显示引用弹窗]
```
代码可查询:
- Chat 页面:`frontend/src/app/dashboard/chat/[id]/page.tsx`
- 引用信息展示:`frontend/src/components/chat/answer.tsx`
```js
const CitationLink = useMemo(
() =>
(
props: ClassAttributes<HTMLAnchorElement> &
AnchorHTMLAttributes<HTMLAnchorElement>
) => {
const citationId = props.href?.match(/^(\d+)$/)?.[1];
const citation = citationId
? citations[parseInt(citationId) - 1]
: null;
if (!citation) {
return <a>[{props.href}]</a>;
}
const citationInfo =
citationInfoMap[
`${citation.metadata.kb_id}-${citation.metadata.document_id}`
];
return (
<Popover>
<PopoverTrigger asChild>
<a
{...props}
href="#"
role="button"
className="inline-flex items-center gap-1 px-1.5 py-0.5 text-xs font-medium text-blue-600 bg-blue-50 rounded hover:bg-blue-100 transition-colors relative"
>
<span className="absolute -top-3 -right-1">[{props.href}]</span>
</a>
</PopoverTrigger>
<PopoverContent
side="top"
align="start"
className="max-w-2xl w-[calc(100vw-100px)] p-4 rounded-lg shadow-lg"
>
<div className="text-sm space-y-3">
{citationInfo && (
<div className="flex items-center gap-2 text-xs font-medium text-gray-700 bg-gray-50 p-2 rounded">
<div className="w-5 h-5 flex items-center justify-center">
<FileIcon
extension={
citationInfo.document.file_name.split(".").pop() || ""
}
color="#E2E8F0"
labelColor="#94A3B8"
/>
</div>
<span className="truncate">
{citationInfo.knowledge_base.name} /{" "}
{citationInfo.document.file_name}
</span>
</div>
)}
<Divider />
<p className="text-gray-700 leading-relaxed">{citation.text}</p>
<Divider />
{Object.keys(citation.metadata).length > 0 && (
<div className="text-xs text-gray-500 bg-gray-50 p-2 rounded">
<div className="font-medium mb-2">Debug Info:</div>
<div className="space-y-1">
{Object.entries(citation.metadata).map(([key, value]) => (
<div key={key} className="flex">
<span className="font-medium min-w-[100px]">
{key}:
</span>
<span className="text-gray-600">{String(value)}</span>
</div>
))}
</div>
</div>
)}
</div>
</PopoverContent>
</Popover>
);
},
[citations, citationInfoMap]
);
```
当用户点击引用信息的时候, 会弹出一个弹窗, 展示引用详情, 包括知识库名称, 文件名称, 以及引用内容。
## 6. 拓展:根据需求定制你的 RAG
### 6.1 不同的向量数据库或大语言模型
目前已经通过 Factory 模式, 支持了不同的向量数据库、不同的大模型,例如 Ollama 也有同学在支持, 可以参考 `backend/app/services/vector_store/factory.py` 这个文件。
### 6.2 Chunk 分割策略与 Embedding 模型的调整
不同的 Embedding 模型对多语言支持和文本类型有不同的特点:
- **多语言支持**:
- `text-embedding-ada-002`:支持多种语言,但对中文等亚洲语言的支持相对较弱
- `bge-large-zh`:对中文有很好的支持
- `multilingual-e5-large`:对多语言都有较好的支持
- **文本类型适用性**:
- 代码文本:建议使用专门的代码 Embedding 模型,如 `CodeBERT`
- 通用文本:可以使用 `text-embedding-ada-002` 或 `bge-large-zh`
- 专业领域文本:建议使用该领域的专门模型
选择合适的 Embedding 模型可以显著提升检索效果。
## 7. 总结与下一步
整个项目到这里就结束了, 整个项目中, 我们通过一个完整的工程实现示例, 来理解 RAG 在知识库问答中的具体应用流程。
如果你需要 Ask Me Anything, 可以通过 [Issue](https://github.com/rag-web-ui/rag-web-ui/issues) 来联系我。
你可以深入研究的方向
- 多路召回(多个数据库或不同关注点检索结果的合并)
- RAG + 交叉编码 re-ranking 提高回答精度
- 长文本多轮对话(上下文记忆 / Conversation Memory)
参考资料:
- [LangChain 官网](https://python.langchain.com/)
- [ChromaDB](https://docs.trychroma.com/)
- [OpenAI Embeddings 介绍](https://platform.openai.com/docs/guides/embeddings)
## 8. 处理网络错误和无法访问的服务器
在注册账户时,可能会遇到网络错误或服务器无法访问的问题。以下是一些处理这些问题的方法:
### 8.1 更新依赖项
确保 `backend/requirements.txt` 文件中指定的依赖项版本是可用的。例如,将 `langchain-deepseek` 的版本更新为 `==0.1.1`:
```plaintext
langchain-deepseek==0.1.1
```
### 8.2 错误处理
在 `backend/app/api/api_v1/auth.py` 文件中添加错误处理,以捕获注册过程中可能出现的网络错误和无法访问的服务器问题:
```python
from requests.exceptions import RequestException
@router.post("/register", response_model=UserResponse)
def register(*, db: Session = Depends(get_db), user_in: UserCreate) -> Any:
"""
Register a new user.
"""
try:
# Check if user with this email exists
user = db.query(User).filter(User.email == user_in.email).first()
if user:
raise HTTPException(
status_code=400,
detail="A user with this email already exists.",
)
# Check if user with this username exists
user = db.query(User).filter(User.username == user_in.username).first()
if user:
raise HTTPException(
status_code=400,
detail="A user with this username already exists.",
)
# Create new user
user = User(
email=user_in.email,
username=user_in.username,
hashed_password=security.get_password_hash(user_in.password),
)
db.add(user)
db.commit()
db.refresh(user)
return user
except RequestException as e:
raise HTTPException(
status_code=503,
detail="Network error or server is unreachable. Please try again later.",
) from e
```
### 8.3 重试机制
在 `backend/Dockerfile` 和 `docker-compose.yml` 文件中添加重试机制,以处理构建过程中可能出现的网络错误:
#### `backend/Dockerfile`
```dockerfile
# Install Python packages with retry mechanism
RUN pip install --no-cache-dir -r requirements.txt || \
(echo "Retrying in 5 seconds..." && sleep 5 && pip install --no-cache-dir -r requirements.txt) || \
(echo "Retrying in 10 seconds..." && sleep 10 && pip install --no-cache-dir -r requirements.txt)
```
#### `docker-compose.yml`
```yaml
services:
backend:
build: ./backend
restart: on-failure
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
```
通过以上方法,可以更好地处理注册账户时可能遇到的网络错误和服务器无法访问的问题。 | {
"source": "rag-web-ui/rag-web-ui",
"title": "docs/tutorial/README.md",
"url": "https://github.com/rag-web-ui/rag-web-ui/blob/main/docs/tutorial/README.md",
"date": "2025-01-14T01:48:30",
"stars": 2035,
"description": "RAG Web UI is an intelligent dialogue system based on RAG (Retrieval-Augmented Generation) technology.",
"file_size": 23511
} |
<div align='center'>
<h1>Emu3: Next-Token Prediction is All You Need</h1>
<h3></h3>
[Emu3 Team, BAAI](https://www.baai.ac.cn/english.html)
| [Project Page](https://emu.baai.ac.cn) | [Paper](https://arxiv.org/pdf/2409.18869) | [🤗HF Models](https://huggingface.co/collections/BAAI/emu3-66f4e64f70850ff358a2e60f) | [Modelscope](https://modelscope.cn/collections/Emu3-9eacc8668b1043) | [Demo](https://huggingface.co/spaces/BAAI/Emu3) |
</div>
<div align='center'>
<img src="./assets/arch.png" class="interpolation-image" alt="arch." height="80%" width="70%" />
</div>
We introduce **Emu3**, a new suite of state-of-the-art multimodal models trained solely with **<i>next-token prediction</i>**! By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences.
### Emu3 excels in both generation and perception
**Emu3** outperforms several well-established task-specific models in both generation and perception tasks, surpassing flagship open models such as SDXL, LLaVA-1.6 and OpenSora-1.2, while eliminating the need for diffusion or compositional architectures.
<div align='center'>
<img src="./assets/comparison.png" class="interpolation-image" alt="comparison." height="80%" width="80%" />
</div>
### Highlights
- **Emu3** is capable of generating high-quality images following the text input, by simply predicting the next vision token. The model naturally supports flexible resolutions and styles.
- **Emu3** shows strong vision-language understanding capabilities to see the physical world and provides coherent text responses. Notably, this capability is achieved without depending on a CLIP and a pretrained LLM.
- **Emu3** simply generates a video causally by predicting the next token in a video sequence, unlike video diffusion models such as Sora. With a video in context, Emu3 can also naturally extend the video and predict what will happen next.
## News
- 2024.10 We release the image pretrained model **[Emu3-Stage1](https://huggingface.co/BAAI/Emu3-Stage1)** and the sft scripts. The model supports image captioning and can generate images at a resolution of 512x512. You can use our training scripts for further instruction tuning for more image generation and perception tasks. 🔥🔥🔥
- 2024.09 We release **[Emu3-Chat](https://huggingface.co/BAAI/Emu3-Chat)** and **[Emu3-Gen](https://huggingface.co/BAAI/Emu3-Gen)**, post-trained models for vision-language understanding and vision generation, respectively.
- 2024.09 We introduce Emu3, a new suite of state-of-the-art multimodal models trained solely with next-token prediction.
### TODO
- [X] Release model weights of tokenizer, Emu3-Chat and Emu3-Gen
- [X] Release the inference code.
- [ ] Release the evaluation code.
- [X] Release training scripts for sft.
- [ ] Release training scripts for pretrain and dpo.
### Setup
Clone this repository and install required packages:
```shell
git clone https://github.com/baaivision/Emu3
cd Emu3
pip install -r requirements.txt
```
### Model Weights
| Model name | HF Weight | Modelscope | Wisemodel |
| ------------------------ | -------------------------------------------------------------- | ------------------------------------------------------------------------- | ----------------------------------------------------------------------- |
| **Emu3-Stage1** | [🤗 HF link](https://huggingface.co/BAAI/Emu3-Stage1) | [Modelscope link](https://modelscope.cn/models/BAAI/Emu3-Stage1) | |
| **Emu3-Chat** | [🤗 HF link](https://huggingface.co/BAAI/Emu3-Chat) | [Modelscope link](https://modelscope.cn/models/BAAI/Emu3-Chat) | [Wisemodel link](https://wisemodel.cn/models/BAAI/Emu3-Chat) |
| **Emu3-Gen** | [🤗 HF link](https://huggingface.co/BAAI/Emu3-Gen) | [Modelscope link](https://modelscope.cn/models/BAAI/Emu3-Gen) | [Wisemodel link](https://wisemodel.cn/models/BAAI/Emu3-Gen) |
| **Emu3-VisionTokenizer** | [🤗 HF link](https://huggingface.co/BAAI/Emu3-VisionTokenizer) | [Modelscope link](https://modelscope.cn/models/BAAI/Emu3-VisionTokenizer) | [Wisemodel link](https://wisemodel.cn/models/BAAI/Emu3-VisionTokenizer) |
### Quickstart
#### Use 🤗Transformers to run Emu3-Gen/Stage1 for image generation
```python
from PIL import Image
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor, AutoModelForCausalLM
from transformers.generation.configuration_utils import GenerationConfig
from transformers.generation import LogitsProcessorList, PrefixConstrainedLogitsProcessor, UnbatchedClassifierFreeGuidanceLogitsProcessor
import torch
from emu3.mllm.processing_emu3 import Emu3Processor
# model path
EMU_HUB = "BAAI/Emu3-Gen"
VQ_HUB = "BAAI/Emu3-VisionTokenizer"
# prepare model and processor
model = AutoModelForCausalLM.from_pretrained(
EMU_HUB,
device_map="cuda:0",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(EMU_HUB, trust_remote_code=True, padding_side="left")
image_processor = AutoImageProcessor.from_pretrained(VQ_HUB, trust_remote_code=True)
image_tokenizer = AutoModel.from_pretrained(VQ_HUB, device_map="cuda:0", trust_remote_code=True).eval()
processor = Emu3Processor(image_processor, image_tokenizer, tokenizer)
# prepare input
POSITIVE_PROMPT = " masterpiece, film grained, best quality."
NEGATIVE_PROMPT = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry."
classifier_free_guidance = 3.0
prompt = "a portrait of young girl."
prompt += POSITIVE_PROMPT
kwargs = dict(
mode='G',
ratio="1:1",
image_area=model.config.image_area,
return_tensors="pt",
padding="longest",
)
pos_inputs = processor(text=prompt, **kwargs)
neg_inputs = processor(text=NEGATIVE_PROMPT, **kwargs)
# prepare hyper parameters
GENERATION_CONFIG = GenerationConfig(
use_cache=True,
eos_token_id=model.config.eos_token_id,
pad_token_id=model.config.pad_token_id,
max_new_tokens=40960,
do_sample=True,
top_k=2048,
)
h = pos_inputs.image_size[:, 0]
w = pos_inputs.image_size[:, 1]
constrained_fn = processor.build_prefix_constrained_fn(h, w)
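# The prefix-constrained function below restricts decoding to a valid vision-token
# layout for the requested height/width, while the CFG processor mixes conditional
# and negative-prompt logits using the classifier_free_guidance scale set above.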
logits_processor = LogitsProcessorList([
UnbatchedClassifierFreeGuidanceLogitsProcessor(
classifier_free_guidance,
model,
unconditional_ids=neg_inputs.input_ids.to("cuda:0"),
),
PrefixConstrainedLogitsProcessor(
        constrained_fn,
num_beams=1,
),
])
# generate
outputs = model.generate(
pos_inputs.input_ids.to("cuda:0"),
GENERATION_CONFIG,
logits_processor=logits_processor,
attention_mask=pos_inputs.attention_mask.to("cuda:0"),
)
mm_list = processor.decode(outputs[0])
for idx, im in enumerate(mm_list):
if not isinstance(im, Image.Image):
continue
im.save(f"result_{idx}.png")
```
#### Use 🤗Transformers to run Emu3-Chat/Stage1 for vision-language understanding
```python
from PIL import Image
from transformers import AutoTokenizer, AutoModel, AutoImageProcessor, AutoModelForCausalLM
from transformers.generation.configuration_utils import GenerationConfig
import torch
from emu3.mllm.processing_emu3 import Emu3Processor
# model path
EMU_HUB = "BAAI/Emu3-Chat"
VQ_HUB = "BAAI/Emu3-VisionTokenier"
# prepare model and processor
model = AutoModelForCausalLM.from_pretrained(
EMU_HUB,
device_map="cuda:0",
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
trust_remote_code=True,
)
# used for Emu3-Chat
tokenizer = AutoTokenizer.from_pretrained(EMU_HUB, trust_remote_code=True, padding_side="left")
# used for Emu3-Stage1
# tokenizer = AutoTokenizer.from_pretrained(
# EMU_HUB,
# trust_remote_code=True,
# chat_template="{image_prompt}{text_prompt}",
# padding_side="left",
# )
image_processor = AutoImageProcessor.from_pretrained(VQ_HUB, trust_remote_code=True)
image_tokenizer = AutoModel.from_pretrained(VQ_HUB, device_map="cuda:0", trust_remote_code=True).eval()
processor = Emu3Processor(image_processor, image_tokenizer, tokenizer)
# prepare input
text = "Please describe the image"
image = Image.open("assets/demo.png")
inputs = processor(
text=text,
image=image,
mode='U',
return_tensors="pt",
padding="longest",
)
# prepare hyper parameters
GENERATION_CONFIG = GenerationConfig(
pad_token_id=tokenizer.pad_token_id,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=1024,
)
# generate
outputs = model.generate(
inputs.input_ids.to("cuda:0"),
GENERATION_CONFIG,
attention_mask=inputs.attention_mask.to("cuda:0"),
)
outputs = outputs[:, inputs.input_ids.shape[-1]:]
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```
#### Use 🤗Transformers to run Emu3-VisionTokenizer for vision encoding and decoding
```python
import os
import os.path as osp
from PIL import Image
import torch
from transformers import AutoModel, AutoImageProcessor
MODEL_HUB = "BAAI/Emu3-VisionTokenizer"
model = AutoModel.from_pretrained(MODEL_HUB, trust_remote_code=True).eval().cuda()
processor = AutoImageProcessor.from_pretrained(MODEL_HUB, trust_remote_code=True)
# TODO: you need to modify the path here
VIDEO_FRAMES_PATH = "YOUR_VIDEO_FRAMES_PATH"
video = os.listdir(VIDEO_FRAMES_PATH)
video.sort()
video = [Image.open(osp.join(VIDEO_FRAMES_PATH, v)) for v in video]
images = processor(video, return_tensors="pt")["pixel_values"]
images = images.unsqueeze(0).cuda()
# image autoencode
image = images[:, 0]
print(image.shape)
with torch.no_grad():
# encode
codes = model.encode(image)
# decode
recon = model.decode(codes)
recon = recon.view(-1, *recon.shape[2:])
recon_image = processor.postprocess(recon)["pixel_values"][0]
recon_image.save("recon_image.png")
# video autoencode
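# Group frames in chunks of `temporal_downsample_factor` so the tokenizer can
# compress along the time axis as well as spatially.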
images = images.view(
-1,
model.config.temporal_downsample_factor,
*images.shape[2:],
)
print(images.shape)
with torch.no_grad():
# encode
codes = model.encode(images)
# decode
recon = model.decode(codes)
recon = recon.view(-1, *recon.shape[2:])
recon_images = processor.postprocess(recon)["pixel_values"]
for idx, im in enumerate(recon_images):
im.save(f"recon_video_{idx}.png")
```
## Acknowledgement
We thank the great work from [Emu Series](https://github.com/baaivision/Emu), [QWen2-VL](https://github.com/QwenLM/Qwen2-VL) and [MoVQGAN](https://github.com/ai-forever/MoVQGAN).
## Citation
If you find Emu3 useful for your research and applications, please consider starring this repository and citing:
```
@article{wang2024emu3,
title={Emu3: Next-Token Prediction is All You Need},
author={Wang, Xinlong and Zhang, Xiaosong and Luo, Zhengxiong and Sun, Quan and Cui, Yufeng and Wang, Jinsheng and Zhang, Fan and Wang, Yueze and Li, Zhen and Yu, Qiying and others},
journal={arXiv preprint arXiv:2409.18869},
year={2024}
}
``` | {
"source": "baaivision/Emu3",
"title": "README.md",
"url": "https://github.com/baaivision/Emu3/blob/main/README.md",
"date": "2024-09-26T11:03:22",
"stars": 2010,
"description": "Next-Token Prediction is All You Need",
"file_size": 11326
} |
# Smol Models 🤏
Welcome to Smol Models, a family of efficient and lightweight AI models from Hugging Face. Our mission is to create powerful yet compact models, for text and vision, that can run effectively on-device while maintaining strong performance.
**News 📰**
- **Introducing [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath), the best public math pretraining dataset 🚀**
- Added continual pretraining code for Llama 3.2 3B on FineMath & FineWeb-Edu with `nanotron`
## 💬 SmolLM2 (Language Model)
[SmolLM2](https://huggingface.co/collections/HuggingFaceTB/smollm2-6723884218bcda64b34d7db9) is our family of compact language models available in three sizes:
- **SmolLM2-135M**: Ultra-lightweight model for basic text tasks
- **SmolLM2-360M**: Balanced model for general use
- **SmolLM2-1.7B**: Our most capable language model, available at **🤏 SmolLM2-1.7B-Instruct** [here](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct).
All models have instruction-tuned versions optimized for assistant-like interactions. Find them in our [SmolLM2 collection](https://huggingface.co/collections/HuggingFaceTB/smollm2-6723884218bcda64b34d7db9).
## 👁️ SmolVLM (Vision Language Model)
[SmolVLM](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct) is our compact multimodal model that can:
- Process both images and text and perform tasks like visual QA, image description, and visual storytelling
- Handle multiple images in a single conversation
- Run efficiently on-device
## Repository Structure
```
smollm/
├── text/ # SmolLM2 related code and resources
├── vision/ # SmolVLM related code and resources
└── tools/ # Shared utilities and inference tools
    ├── smol_tools/              # Lightweight AI-powered tools
    ├── smollm_local_inference/
    └── smolvlm_local_inference/
```
## Getting Started
### SmolLM2
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)
messages = [{"role": "user", "content": "Write a 100-word article on 'Benefits of Open-Source in AI research"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
```
### SmolVLM
```python
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image
processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
model = AutoModelForVision2Seq.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What's in this image?"}
        ]
    }
]
# Load an image, apply the chat template, and generate (mirrors the fuller example in vision/README.md)
image = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=500)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```
## Ecosystem
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/RvHjdlRT5gGQt5mJuhXH9.png" width="700"/>
</div>
## Resources
### Documentation
- [SmolLM2 Documentation](text/README.md)
- [SmolVLM Documentation](vision/README.md)
- [Local Inference Guide](tools/README.md)
### Pretrained Models
- [SmolLM2 Models Collection](https://huggingface.co/collections/HuggingFaceTB/smollm2-6723884218bcda64b34d7db9)
- [SmolVLM Model](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct)
### Datasets
- [SmolTalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) - Our instruction-tuning dataset
- [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) - Mathematics pretraining dataset
- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) - Educational content pretraining dataset | {
"source": "huggingface/smollm",
"title": "README.md",
"url": "https://github.com/huggingface/smollm/blob/main/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 3599
} |
# SmolLM2

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device. You can find our most capable model **🤏 SmolLM2-1.7B-Instruct** [here](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct).
In this section you can find everything related to the training of SmolLM2. This includes pretraining and finetuning code, data curation as well as evaluation. We also recommend [SmolCourse](https://github.com/huggingface/smol-course) for more resources on smol models and how to leverage SmolLM2.
**News 📰**
- **Introducing [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath), the best public math pre-training dataset 🚀**
- We added the code to do continual pre-training of Llama 3.2 3B on FineMath & FineWeb-Edu with `nanotron` at [pretraining/continual-pretraining](./pretraining/continual-pretraining)
## Table of Contents
1. [Usage](#usage)
- [Transformers](#transformers)
- [Chat in TRL](#chat-in-trl)
- [Local inference](#local-inference)
- [Smol-tools](#smol-tools)
2. [Pretraining](#pretraining)
3. [Finetuning](#finetuning)
4. [Evaluation](#evaluation)
5. [Data](#data)
## Usage
Our most powerful model is `SmolLM2-1.7B-Instruct`, which you can use as an assistant with `transformers` and `trl`, or as quantized versions with tools like `llama.cpp`, `MLX`, and `transformers.js`. For lighter applications, you can also use the smaller models `SmolLM2-360M` and `SmolLM2-135M`, which are suitable for on-device usage and can be integrated similarly.
All available in this [collection](https://huggingface.co/collections/HuggingFaceTB/smollm2-6723884218bcda64b34d7db9).
### Transformers
```bash
pip install transformers
```
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
messages = [{"role": "user", "content": "Write a 100-word article on 'Benefits of Open-Source in AI research"}]
input_text=tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
```
### Chat in TRL
You can also use the TRL CLI to chat with the model from the terminal:
```bash
pip install trl
trl chat --model_name_or_path HuggingFaceTB/SmolLM2-1.7B-Instruct --device cpu
```
You can find more details on how to leverage the model for use cases such as text summarization, text rewriting and function calling in the model card: https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
### Local inference
You can use the models locally with frameworks like `llama.cpp`, `MLX`, `MLC` and `transformers.js`. You can find the instructions to run SmolLM2 with these frameworks at [local-inference](../tools/smollm_local_inference/README.md).
### Smol-tools
A collection of lightweight AI-powered tools built with LLaMA.cpp and small language models. These tools are designed to run locally on your machine without requiring expensive GPU resources.
Further instructions on how to use the tools can be found in the [smol-tools README](../tools/smol_tools/README.md).
## Pretraining
You can find scripts for launching pretraining with [nanotron](https://github.com/huggingface/nanotron/) under [pretraining](./pretraining/README.md); we share the exact configs for training SmolLM1 and will upload SmolLM2's configs soon. Additionally, we provide code for continual pretraining of SmolLM2 and Llama 3.2 3B using nanotron. The SmolLM2 nanotron checkpoints are available [on the hub](https://huggingface.co/HuggingFaceTB/SmolLM2-nanotron-ckpt) with their optimizer states.
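If you want to inspect or resume from that checkpoint locally, it can be fetched with `huggingface_hub`; the snippet below is a minimal sketch and the local directory name is arbitrary:
```python
# Minimal sketch: download the SmolLM2 nanotron checkpoint (including optimizer states).
from huggingface_hub import snapshot_download
ckpt_dir = snapshot_download(
    repo_id="HuggingFaceTB/SmolLM2-nanotron-ckpt",
    local_dir="SmolLM2-nanotron-ckpt",  # arbitrary local directory
)
print(f"Checkpoint downloaded to {ckpt_dir}")
```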
## Finetuning
You can find an example script to finetune SmolLM2 using `TRL` and `PEFT` in the `finetuning` folder, and we link to our post-training scripts for SmolLM2 using the alignment handbook. We also recommend [SmolCourse](https://github.com/huggingface/smol-course) for more resources on finetuning smol models and SmolLM2.
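As a rough illustration of what such a run looks like, here is a minimal LoRA sketch with `trl` and `peft`. It is not the script from the `finetuning` folder: it assumes a recent `trl`/`peft`/`datasets` install, uses a small slice of the SmolTalk dataset, and picks hyperparameters purely for demonstration.
```python
# Minimal LoRA fine-tuning sketch for SmolLM2 with TRL + PEFT (illustrative only).
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer
# Small slice of SmolTalk just to exercise the pipeline quickly.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train[:1%]")
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear", task_type="CAUSAL_LM")
trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="smollm2-sft-lora", per_device_train_batch_size=2, num_train_epochs=1),
)
trainer.train()
```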
## Data
We also provide the code for curating the SmolLM datasets in [data](./data/README.md); this includes FineWeb-Edu, FineMath, and the [distilabel](https://github.com/argilla-io/distilabel) pipelines for SmolTalk. | {
"source": "huggingface/smollm",
"title": "text/README.md",
"url": "https://github.com/huggingface/smollm/blob/main/text/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 4787
} |
# Tools for local inference
Here you can find tools and demos for running SmolLM2 and SmolVLM locally, leveraging libraries such as llama.cpp, MLX, MLC, and Transformers.js. | {
"source": "huggingface/smollm",
"title": "tools/README.md",
"url": "https://github.com/huggingface/smollm/blob/main/tools/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 172
} |
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/SmolVLM.png" width="800" height="auto" alt="Image description">
# SmolVLM
[SmolVLM](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct) is a compact open multimodal model that accepts arbitrary sequences of image and text inputs to produce text outputs. It uses SmolLM2-1.7B-Instruct as a language backbone and is designed for efficiency. SmolVLM can answer questions about images, describe visual content, create stories grounded on multiple images, or function as a pure language model without visual inputs. Its lightweight architecture makes it suitable for on-device applications while maintaining strong performance on multimodal tasks.
More details in this blog post: https://huggingface.co/blog/smolvlm
In this section you can find everything related to the training of our Vision Language Models series: SmolVLM. This includes pretraining and finetuning code, as well as evaluation (TODO).
# Table of Contents
1. [Usage](#usage)
2. [Inference with transformers](#inference-with-transformers)
3. [Inference with mlx-vlm](#inference-with-mlx-vlm)
4. [Video Inference](#video-inference)
## Usage
SmolVLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. The model does not support image generation.
To fine-tune SmolVLM on a specific task, you can follow this [fine-tuning tutorial](finetuning/Smol_VLM_FT.ipynb)
## Inference with transformers
You can use transformers to load, infer and fine-tune SmolVLM.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# Load images
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")
# Initialize processor and model
processor = AutoProcessor.from_pretrained("HuggingFaceTB/SmolVLM-Instruct")
model = AutoModelForVision2Seq.from_pretrained(
"HuggingFaceTB/SmolVLM-Instruct",
torch_dtype=torch.bfloat16,
_attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
).to(DEVICE)
# Create input messages
messages = [
{
"role": "user",
"content": [
{"type": "image"},
{"type": "image"},
{"type": "text", "text": "Can you describe the two images?"}
]
},
]
# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = inputs.to(DEVICE)
# Generate outputs
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(
generated_ids,
skip_special_tokens=True,
)
print(generated_texts[0])
"""
Assistant: The first image shows a green statue of the Statue of Liberty standing on a stone pedestal in front of a body of water.
The statue is holding a torch in its right hand and a tablet in its left hand. The water is calm and there are no boats or other objects visible.
The sky is clear and there are no clouds. The second image shows a bee on a pink flower.
The bee is black and yellow and is collecting pollen from the flower. The flower is surrounded by green leaves.
"""
```
## Inference with mlx-vlm
You can also get fast generations for SmolVLM locally with mlx-vlm:
```bash
pip install -U mlx-vlm
python -m mlx_vlm.chat_ui --model mlx-community/SmolVLM-Instruct-8bit
```
## Video inference
Given SmolVLM's long context and the possibility of tweaking the internal frame resizing of the model, we explored its suitability as an accessible option for basic video analysis tasks, particularly when computational resources are limited.
In our evaluation of SmolVLM's video understanding capabilities, we implemented a straightforward video processing pipeline in [SmolVLM_video_inference.py](../tools/smolvlm_local_inference/SmolVLM_video_inference.py), extracting up to 50 evenly sampled frames from each video while avoiding internal frame resizing. This simple approach yielded surprisingly competitive results on the CinePile benchmark, with a score of 27.14%, a performance that positions the model between InternVL2 (2B) and Video-LLaVA (7B).
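For reference, evenly sampling frames only takes a few lines. The sketch below uses OpenCV and a hypothetical video path; it is not the implementation in `SmolVLM_video_inference.py`.
```python
# Minimal sketch: evenly sample up to `max_frames` frames from a video with OpenCV.
import cv2
from PIL import Image

def sample_frames(video_path: str, max_frames: int = 50) -> list[Image.Image]:
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices across the whole clip
    indices = [int(i * total / max_frames) for i in range(min(max_frames, total))]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes to BGR; convert to RGB before handing frames to the processor.
        frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames

frames = sample_frames("my_clip.mp4")  # hypothetical path
```
The resulting `frames` list can then be passed to the processor as the images input, just like the multi-image example above.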
## Training codebase
The training codebase is available in the [m4](m4) and [experiments](experiments) folders. This codebase is based on an internal HuggingFace codebase that has been in development since 2022. Some of the biggest contributors are:
- [VictorSanh](https://github.com/VictorSanh)
- [HugoLaurencon](https://github.com/HugoLaurencon)
- [SaulLu](https://github.com/SaulLu)
- [leot13](https://github.com/leot13)
- [stas00](https://github.com/stas00)
- [apsdehal](https://github.com/apsdehal)
- [thomasw21](https://github.com/thomasw21)
- [siddk](https://github.com/siddk) | {
"source": "huggingface/smollm",
"title": "vision/README.md",
"url": "https://github.com/huggingface/smollm/blob/main/vision/README.md",
"date": "2024-11-04T13:01:54",
"stars": 1945,
"description": "Everything about the SmolLM2 and SmolVLM family of models ",
"file_size": 5210
} |