# 🚀 Cursor Free Trial Reset Tool

<div align="center">

[![Release](https://img.shields.io/github/v/release/yuaotian/go-cursor-help?style=flat-square&logo=github&color=blue)](https://github.com/yuaotian/go-cursor-help/releases/latest)
[![License](https://img.shields.io/badge/license-MIT-blue.svg?style=flat-square&logo=bookstack)](https://github.com/yuaotian/go-cursor-help/blob/master/LICENSE)
[![Stars](https://img.shields.io/github/stars/yuaotian/go-cursor-help?style=flat-square&logo=github)](https://github.com/yuaotian/go-cursor-help/stargazers)

[🌟 English](README.md) | [🌏 中文](README_CN.md)

<img src="https://ai-cursor.com/wp-content/uploads/2024/09/logo-cursor-ai-png.webp" alt="Cursor Logo" width="120"/>

</div>

> ⚠️ **Important Notice**
>
> This tool currently supports:
> - ✅ Cursor v0.44.11 and below
> - ✅ Windows: the latest 0.45.x versions (supported)
> - ✅ Mac/Linux: the latest 0.45.x versions (supported; testing and issue reports are welcome)
>
> Please check your Cursor version before use.

<details open>
<summary><b>📦 Version History & Downloads</b></summary>

<div class="version-card" style="background: linear-gradient(135deg, #6e8efb, #a777e3); border-radius: 8px; padding: 15px; margin: 10px 0; color: white;">

### 🌟 Latest Versions

- v0.45.11 (2025-02-07) - latest release
- v0.44.11 (2025-01-03) - most stable release

[View the full version history](CursorHistoryDown.md)

</div>

### 📥 Direct Download Links

**v0.44.11 (recommended stable version)**

- Windows: [Official download](https://downloader.cursor.sh/builds/250103fqxdt5u9z/windows/nsis/x64) | [Mirror download](https://download.todesktop.com/230313mzl4w4u92/Cursor%20Setup%200.44.11%20-%20Build%20250103fqxdt5u9z-x64.exe)
- Mac: [Apple Silicon](https://dl.todesktop.com/230313mzl4w4u92/versions/0.44.11/mac/zip/arm64)

</details>

> ⚠️ **MAC Address Modification Warning**
>
> Mac users please note: this script includes a MAC address modification feature that will:
> - Modify the MAC address of your network interfaces
> - Back up the original MAC address before modification
> - Possibly cause a temporary loss of network connectivity
> - Let you skip this step during execution

### 🚀 Supported Systems

<table>
<tr>
<td>

**Windows** ✅

- x64 & x86

</td>
<td>

**macOS** ✅

- Intel & M-series

</td>
<td>

**Linux** ✅

- x64 & ARM64

</td>
</tr>
</table>

### 🚀 One-Click Solution

<details open>
<summary><b>Users in mainland China (recommended)</b></summary>

**macOS**

```bash
curl -fsSL https://aizaozao.com/accelerate.php/https://raw.githubusercontent.com/yuaotian/go-cursor-help/refs/heads/master/scripts/run/cursor_mac_id_modifier.sh | sudo bash
```

**Linux**

```bash
curl -fsSL https://aizaozao.com/accelerate.php/https://raw.githubusercontent.com/yuaotian/go-cursor-help/refs/heads/master/scripts/run/cursor_linux_id_modifier.sh | sudo bash
```

**Windows**

```powershell
irm https://aizaozao.com/accelerate.php/https://raw.githubusercontent.com/yuaotian/go-cursor-help/refs/heads/master/scripts/run/cursor_win_id_modifier.ps1 | iex
```

<div align="center">
<img src="img/run_success.png" alt="Successful run" width="600"/>
</div>

</details>

<details open>
<summary><b>Running from a Windows administrator terminal and manual installation</b></summary>

#### How to open an administrator terminal on Windows:

##### Method 1: Win + X shortcut
```md
1. Press Win + X
2. In the menu that appears, choose one of the following:
   - "Windows PowerShell (Administrator)"
   - "Windows Terminal (Administrator)"
   - "Terminal (Administrator)"
   (the exact options depend on your Windows version)
```

##### Method 2: Win + R Run dialog
```md
1. Press Win + R
2. Type powershell or pwsh in the Run dialog
3. Press Ctrl + Shift + Enter to run as administrator,
   or enter the following in the opened window: Start-Process pwsh -Verb RunAs
4. Enter the reset script in the administrator terminal:
   irm https://aizaozao.com/accelerate.php/https://raw.githubusercontent.com/yuaotian/go-cursor-help/refs/heads/master/scripts/run/cursor_win_id_modifier.ps1 | iex
```

##### Method 3: Launch via search
>![Search PowerShell](img/pwsh_1.png)
>
>Type pwsh in the search box, then right-click and choose "Run as administrator"
>![Run as administrator](img/pwsh_2.png)

Enter the reset script in the administrator terminal:

```powershell
irm https://aizaozao.com/accelerate.php/https://raw.githubusercontent.com/yuaotian/go-cursor-help/refs/heads/master/scripts/run/cursor_win_id_modifier.ps1 | iex
```

### 🔧 PowerShell Installation Guide

If PowerShell is not installed on your system, you can install it as follows:

#### Method 1: Install via Winget (recommended)

1. Open Command Prompt or PowerShell
2. Run the following command:

```powershell
winget install --id Microsoft.PowerShell --source winget
```

#### Method 2: Manual download and installation

1. Download the installer for your system:
   - [PowerShell-7.4.6-win-x64.msi](https://github.com/PowerShell/PowerShell/releases/download/v7.4.6/PowerShell-7.4.6-win-x64.msi) (64-bit systems)
   - [PowerShell-7.4.6-win-x86.msi](https://github.com/PowerShell/PowerShell/releases/download/v7.4.6/PowerShell-7.4.6-win-x86.msi) (32-bit systems)
   - [PowerShell-7.4.6-win-arm64.msi](https://github.com/PowerShell/PowerShell/releases/download/v7.4.6/PowerShell-7.4.6-win-arm64.msi) (ARM64 systems)
2. Double-click the downloaded installer and follow the prompts

> 💡 If you still run into problems, see the [official Microsoft installation guide](https://learn.microsoft.com/zh-cn/powershell/scripting/install/installing-powershell-on-windows)

</details>

#### Windows installation features:

- 🔍 Automatically detects and uses PowerShell 7 when available
- 🛡️ Requests administrator privileges via a UAC prompt
- 📝 Falls back to Windows PowerShell if PowerShell 7 is not present
- 💡 Provides manual instructions if elevation fails

When finished, the script will:

1. ✨ Install the tool automatically
2. 🔄 Reset the Cursor trial immediately

### 📦 Manual Installation

> Download the file that matches your system from [releases](https://github.com/yuaotian/go-cursor-help/releases/latest)

<details>
<summary>Windows packages</summary>

- 64-bit: `cursor-id-modifier_windows_x64.exe`
- 32-bit: `cursor-id-modifier_windows_x86.exe`
</details>

<details>
<summary>macOS packages</summary>

- Intel: `cursor-id-modifier_darwin_x64_intel`
- M1/M2: `cursor-id-modifier_darwin_arm64_apple_silicon`
</details>

<details>
<summary>Linux packages</summary>

- 64-bit: `cursor-id-modifier_linux_x64`
- 32-bit: `cursor-id-modifier_linux_x86`
- ARM64: `cursor-id-modifier_linux_arm64`
</details>

### 🔧 Technical Details

<details>
<summary><b>Registry modification notes</b></summary>

> ⚠️ **Important: this tool modifies the Windows registry**

#### What is modified
- Path: `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography`
- Value: `MachineGuid`

#### Potential impact
Modifying this registry value may affect:
- How Windows uniquely identifies the device
- Device identification and licensing status of some software
- System features based on hardware identification

#### Safety measures
1. Automatic backup
   - The original value is backed up before every modification
   - Backups are stored in: `%APPDATA%\Cursor\User\globalStorage\backups`
   - Backup file format: `MachineGuid.backup_YYYYMMDD_HHMMSS`

2. Manual recovery
   - Open the Registry Editor (regedit)
   - Navigate to: `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography`
   - Right-click `MachineGuid`
   - Choose "Modify"
   - Paste the value from the backup file

#### Notes
- Confirm that a backup file exists before modifying
- If problems occur, restore the original value from the backup file
- Administrator privileges are required to modify the registry
</details>

<details>
<summary><b>Configuration file</b></summary>

The program modifies Cursor's `storage.json` configuration file, located at:

- Windows: `%APPDATA%\Cursor\User\globalStorage\`
- macOS: `~/Library/Application Support/Cursor/User/globalStorage/`
- Linux: `~/.config/Cursor/User/globalStorage/`
</details>

<details>
<summary><b>Modified fields</b></summary>

The tool generates new unique identifiers for:

- `telemetry.machineId`
- `telemetry.macMachineId`
- `telemetry.devDeviceId`
- `telemetry.sqmId`
</details>

<details>
<summary><b>Manually disabling auto-updates</b></summary>

Windows users can disable the auto-update feature manually:

1. Close all Cursor processes
2. Delete the directory `C:\Users\<username>\AppData\Local\cursor-updater`
3. Create a file with the same name, `cursor-updater`, without an extension

macOS/Linux users can try to locate a similar `cursor-updater` directory on their system and do the same.
</details>

<details>
<summary><b>Safety features</b></summary>

- ✅ Safe process termination
- ✅ Atomic file operations
- ✅ Error handling and recovery
</details>

## Contact

<div align="center">
<table>
<tr>
<td align="center">
<b>Personal WeChat</b><br>
<img src="img/wx_me.png" width="250" alt="Author's WeChat"><br>
<b>WeChat: JavaRookie666</b>
</td>
<td align="center">
<b>WeChat Group</b><br>
<img src="img/wx_group4.jpg" width="250" alt="WeChat group QR code"><br>
<small>Valid for 7 days (before March 1); if the group is full, follow the official account for the latest updates</small>
</td>
<td align="center">
<b>Official Account</b><br>
<img src="img/wx_public_2.png" width="250" alt="WeChat official account"><br>
<small>More AI development resources</small>
</td>
<td align="center">
<b>WeChat Tip</b><br>
<img src="img/wx_zsm2.png" width="500" alt="WeChat tipping code"><br>
<small>Tips are entirely optional~</small>
</td>
<td align="center">
<b>Alipay Tip</b><br>
<img src="img/alipay.png" width="500" alt="Alipay tipping code"><br>
<small>If this helped you, consider buying the author a snack~</small>
</td>
</tr>
</table>
</div>

---

### 📚 Recommended Reading

- [Cursor issue collection and solutions](https://mp.weixin.qq.com/s/pnJrH7Ifx4WZvseeP1fcEA)
- [Prompting guide for general-purpose AI development assistants](https://mp.weixin.qq.com/s/PRPz-qVkFJSgkuEKkTdzwg)

---

## ⭐ Project Stats

<div align="center">

[![Star History Chart](https://api.star-history.com/svg?repos=yuaotian/go-cursor-help&type=Date)](https://star-history.com/#yuaotian/go-cursor-help&Date)

![Repobeats analytics image](https://repobeats.axiom.co/api/embed/ddaa9df9a94b0029ec3fad399e1c1c4e75755477.svg "Repobeats analytics image")

</div>

## 📄 License

<details>
<summary><b>MIT License</b></summary>

Copyright (c) 2024

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
</details>
{ "source": "yuaotian/go-cursor-help", "title": "README_CN.md", "url": "https://github.com/yuaotian/go-cursor-help/blob/master/README_CN.md", "date": "2024-12-09T07:07:09", "stars": 10912, "description": "解决Cursor在免费订阅期间出现以下提示的问题: You've reached your trial request limit. / Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake.", "file_size": 8340 }
# Next.js SaaS Starter This is a starter template for building a SaaS application using **Next.js** with support for authentication, Stripe integration for payments, and a dashboard for logged-in users. **Demo: [https://next-saas-start.vercel.app/](https://next-saas-start.vercel.app/)** ## Features - Marketing landing page (`/`) with animated Terminal element - Pricing page (`/pricing`) which connects to Stripe Checkout - Dashboard pages with CRUD operations on users/teams - Basic RBAC with Owner and Member roles - Subscription management with Stripe Customer Portal - Email/password authentication with JWTs stored to cookies - Global middleware to protect logged-in routes - Local middleware to protect Server Actions or validate Zod schemas - Activity logging system for any user events ## Tech Stack - **Framework**: [Next.js](https://nextjs.org/) - **Database**: [Postgres](https://www.postgresql.org/) - **ORM**: [Drizzle](https://orm.drizzle.team/) - **Payments**: [Stripe](https://stripe.com/) - **UI Library**: [shadcn/ui](https://ui.shadcn.com/) ## Getting Started ```bash git clone https://github.com/nextjs/saas-starter cd saas-starter pnpm install ``` ## Running Locally Use the included setup script to create your `.env` file: ```bash pnpm db:setup ``` Then, run the database migrations and seed the database with a default user and team: ```bash pnpm db:migrate pnpm db:seed ``` This will create the following user and team: - User: `[email protected]` - Password: `admin123` You can, of course, create new users as well through `/sign-up`. Finally, run the Next.js development server: ```bash pnpm dev ``` Open [http://localhost:3000](http://localhost:3000) in your browser to see the app in action. Optionally, you can listen for Stripe webhooks locally through their CLI to handle subscription change events: ```bash stripe listen --forward-to localhost:3000/api/stripe/webhook ``` ## Testing Payments To test Stripe payments, use the following test card details: - Card Number: `4242 4242 4242 4242` - Expiration: Any future date - CVC: Any 3-digit number ## Going to Production When you're ready to deploy your SaaS application to production, follow these steps: ### Set up a production Stripe webhook 1. Go to the Stripe Dashboard and create a new webhook for your production environment. 2. Set the endpoint URL to your production API route (e.g., `https://yourdomain.com/api/stripe/webhook`). 3. Select the events you want to listen for (e.g., `checkout.session.completed`, `customer.subscription.updated`). ### Deploy to Vercel 1. Push your code to a GitHub repository. 2. Connect your repository to [Vercel](https://vercel.com/) and deploy it. 3. Follow the Vercel deployment process, which will guide you through setting up your project. ### Add environment variables In your Vercel project settings (or during deployment), add all the necessary environment variables. Make sure to update the values for the production environment, including: 1. `BASE_URL`: Set this to your production domain. 2. `STRIPE_SECRET_KEY`: Use your Stripe secret key for the production environment. 3. `STRIPE_WEBHOOK_SECRET`: Use the webhook secret from the production webhook you created in step 1. 4. `POSTGRES_URL`: Set this to your production database URL. 5. `AUTH_SECRET`: Set this to a random string. `openssl rand -base64 32` will generate one. 
## Other Templates While this template is intentionally minimal and to be used as a learning resource, there are other paid versions in the community which are more full-featured: - https://achromatic.dev - https://shipfa.st - https://makerkit.dev
{ "source": "nextjs/saas-starter", "title": "README.md", "url": "https://github.com/nextjs/saas-starter/blob/main/README.md", "date": "2024-09-10T00:18:56", "stars": 10890, "description": "Get started quickly with Next.js, Postgres, Stripe, and shadcn/ui.", "file_size": 3644 }
<h1 style="text-align: center;">veRL: Volcano Engine Reinforcement Learning for LLM</h1> veRL is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs). veRL is the open-source version of **[HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2)** paper. veRL is flexible and easy to use with: - **Easy extension of diverse RL algorithms**: The Hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex Post-Training dataflows. Allowing users to build RL dataflows in a few lines of code. - **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks. - **Flexible device mapping**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes. - Readily integration with popular HuggingFace models veRL is fast with: - **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput. - **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases. <p align="center"> | <a href="https://verl.readthedocs.io/en/latest/index.html"><b>Documentation</b></a> | <a href="https://arxiv.org/abs/2409.19256v2"><b>Paper</b></a> | <a href="https://join.slack.com/t/verlgroup/shared_invite/zt-2w5p9o4c3-yy0x2Q56s_VlGLsJ93A6vA"><b>Slack</b></a> | <a href="https://raw.githubusercontent.com/eric-haibin-lin/verl-community/refs/heads/main/WeChat.JPG"><b>Wechat</b></a> | <!-- <a href=""><b>Slides</b></a> | --> </p> ## News - [2024/12] The team presented <a href="https://neurips.cc/Expo/Conferences/2024/workshop/100677">Post-training LLMs: From Algorithms to Infrastructure</a> at NeurIPS 2024. [Slides](https://github.com/eric-haibin-lin/verl-data/tree/neurips) and [video](https://neurips.cc/Expo/Conferences/2024/workshop/100677) available. - [2024/10] veRL is presented at Ray Summit. [Youtube video](https://www.youtube.com/watch?v=MrhMcXkXvJU&list=PLzTswPQNepXntmT8jr9WaNfqQ60QwW7-U&index=37) available. - [2024/08] HybridFlow (verl) is accepted to EuroSys 2025. ## Key Features - **FSDP** and **Megatron-LM** for training. - **vLLM** and **TGI** for rollout generation, **SGLang** support coming soon. - huggingface models support - Supervised fine-tuning - Reward model training - Reinforcement learning from human feedback with PPO - flash-attention integration, sequence packing - scales up to 70B models and hundreds of GPUs - experiment tracking with wandb and mlflow ## Getting Started Checkout this [Jupyter Notebook](https://github.com/volcengine/verl/tree/main/examples/ppo_trainer/verl_getting_started.ipynb) to get started with PPO training with a single 24GB L4 GPU (**FREE** GPU quota provided by [Lighting Studio](https://lightning.ai/hlin-verl/studios/verl-getting-started))! 
**Quickstart:** - [Installation](https://verl.readthedocs.io/en/latest/start/install.html) - [Quickstart](https://verl.readthedocs.io/en/latest/start/quickstart.html) **Running an PPO example step-by-step:** - Data and Reward Preparation - [Prepare Data (Parquet) for Post-Training](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html) - [Implement Reward Function for Dataset](https://verl.readthedocs.io/en/latest/preparation/reward_function.html) - Understanding the PPO Example - [PPO Example Architecture](https://verl.readthedocs.io/en/latest/examples/ppo_code_architecture.html) - [Config Explanation](https://verl.readthedocs.io/en/latest/examples/config.html) - [Run GSM8K Example](https://verl.readthedocs.io/en/latest/examples/gsm8k_example.html) **Reproducible algorithm baselines:** - [PPO](https://verl.readthedocs.io/en/latest/experiment/ppo.html) **For code explanation and advance usage (extension):** - PPO Trainer and Workers - [PPO Ray Trainer](https://verl.readthedocs.io/en/latest/workers/ray_trainer.html) - [PyTorch FSDP Backend](https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html) - [Megatron-LM Backend](https://verl.readthedocs.io/en/latest/index.html) - Advance Usage and Extension - [Ray API Design Tutorial](https://verl.readthedocs.io/en/latest/advance/placement.html) - [Extend to other RL(HF) algorithms](https://verl.readthedocs.io/en/latest/advance/dpo_extension.html) - [Add models with the FSDP backend](https://verl.readthedocs.io/en/latest/advance/fsdp_extension.html) - [Add models with the Megatron-LM backend](https://verl.readthedocs.io/en/latest/advance/megatron_extension.html) ## Citation and acknowledgement If you find the project helpful, please cite: - [HybridFlow: A Flexible and Efficient RLHF Framework](https://arxiv.org/abs/2409.19256v2) - [A Framework for Training Large Language Models for Code Generation via Proximal Policy Optimization](https://i.cs.hku.hk/~cwu/papers/gmsheng-NL2Code24.pdf) ```tex @article{sheng2024hybridflow, title = {HybridFlow: A Flexible and Efficient RLHF Framework}, author = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu}, year = {2024}, journal = {arXiv preprint arXiv: 2409.19256} } ``` verl is inspired by the design of Nemo-Aligner, Deepspeed-chat and OpenRLHF. The project is adopted and supported by Anyscale, Bytedance, LMSys.org, Shanghai AI Lab, Tsinghua University, UC Berkeley, UCLA, UIUC, and University of Hong Kong. ## Publications Using veRL - [Enhancing Multi-Step Reasoning Abilities of Language Models through Direct Q-Function Optimization](https://arxiv.org/abs/2410.09302) - [Flaming-hot Initiation with Regular Execution Sampling for Large Language Models](https://arxiv.org/abs/2410.21236) - [Process Reinforcement Through Implicit Rewards](https://github.com/PRIME-RL/PRIME/) We are HIRING! Send us an [email](mailto:[email protected]) if you are interested in internship/FTE opportunities in MLSys/LLM reasoning/multimodal alignment.
{ "source": "Jiayi-Pan/TinyZero", "title": "OLD_README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/OLD_README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 6480 }
# TinyZero
![image](cover.png)

TinyZero is a reproduction of [DeepSeek R1 Zero](https://github.com/deepseek-ai/DeepSeek-R1) on countdown and multiplication tasks. We built it upon [veRL](https://github.com/volcengine/verl).

Through RL, the 3B base LM develops self-verification and search abilities all on its own. You can experience the aha moment yourself for < $30.

Twitter thread: https://x.com/jiayi_pirate/status/1882839370505621655

Full experiment log: https://wandb.ai/jiayipan/TinyZero

The paper is on its way!

## Installation

```
conda create -n zero python=3.9

# install torch [or you can skip this step and let vllm install the correct version for you]
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu121

# install vllm
pip3 install vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1
pip3 install ray

# verl
pip install -e .

# flash attention 2
pip3 install flash-attn --no-build-isolation

# quality of life
pip install wandb IPython matplotlib
```

## Countdown task

**Data Preparation**
```
conda activate zero
python ./examples/data_preprocess/countdown.py --local_dir {path_to_your_dataset}
```

### Run Training
```
conda activate zero
```

For the following scripts, if you run out of VRAM, try adding `critic.model.enable_gradient_checkpointing=True` to the script, and check out the discussion [here](https://github.com/Jiayi-Pan/TinyZero/issues/5#issuecomment-2624161643).

**Single GPU**

Works for models <= 1.5B. For the Qwen2.5-0.5B base model, we know it fails to learn reasoning.

```
export N_GPUS=1
export BASE_MODEL={path_to_your_model}
export DATA_DIR={path_to_your_dataset}
export ROLLOUT_TP_SIZE=1
export EXPERIMENT_NAME=countdown-qwen2.5-0.5b
export VLLM_ATTENTION_BACKEND=XFORMERS
bash ./scripts/train_tiny_zero.sh
```

**3B+ model**

In this case, the base model is able to develop sophisticated reasoning skills.

```
export N_GPUS=2
export BASE_MODEL={path_to_your_model}
export DATA_DIR={path_to_your_dataset}
export ROLLOUT_TP_SIZE=2
export EXPERIMENT_NAME=countdown-qwen2.5-3b
export VLLM_ATTENTION_BACKEND=XFORMERS
bash ./scripts/train_tiny_zero.sh
```

### Instruct Ablation
We also experiment with Qwen2.5-3B-Instruct.

**Data Preparation**

To follow the chat template, we need to reprocess the data:
```
conda activate zero
python examples/data_preprocess/countdown.py --template_type=qwen-instruct --local_dir={path_to_your_dataset}
```

**Training**
```
export N_GPUS=2
export BASE_MODEL={path_to_your_model}
export DATA_DIR={path_to_your_dataset}
export ROLLOUT_TP_SIZE=2
export EXPERIMENT_NAME=countdown-qwen2.5-3b-instruct
export VLLM_ATTENTION_BACKEND=XFORMERS
bash ./scripts/train_tiny_zero.sh
```

## Acknowledgements
* We run our experiments based on [veRL](https://github.com/volcengine/verl).
* We use the Qwen2.5 series of base models [Qwen2.5](https://github.com/QwenLM/Qwen2.5).

## Citation
```
@misc{tinyzero,
  author       = {Jiayi Pan and Junjie Zhang and Xingyao Wang and Lifan Yuan and Hao Peng and Alane Suhr},
  title        = {TinyZero},
  howpublished = {https://github.com/Jiayi-Pan/TinyZero},
  note         = {Accessed: 2025-01-24},
  year         = {2025}
}
```
{ "source": "Jiayi-Pan/TinyZero", "title": "README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3127 }
# veRL documents ## Build the docs ```bash # Install dependencies. pip install -r requirements-docs.txt # Build the docs. make clean make html ``` ## Open the docs with your browser ```bash python -m http.server -d _build/html/ ``` Launch your browser and open localhost:8000.
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 281 }
Welcome to veRL's documentation! ================================================ .. _hf_arxiv: https://arxiv.org/pdf/2409.19256 veRL is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs) post-training. It is an open source implementation of the `HybridFlow <hf_arxiv>`_ paper. veRL is flexible and easy to use with: - **Easy extension of diverse RL algorithms**: The Hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex Post-Training dataflows. Allowing users to build RL dataflows in a few lines of code. - **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend to other LLM training and inference frameworks. - **Flexible device mapping and parallelism**: Supports various placement of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes. - Readily integration with popular HuggingFace models veRL is fast with: - **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput. - **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases. -------------------------------------------- .. _Contents: .. toctree:: :maxdepth: 5 :caption: Quickstart :titlesonly: :numbered: start/install start/quickstart .. toctree:: :maxdepth: 5 :caption: Data Preparation :titlesonly: :numbered: preparation/prepare_data preparation/reward_function .. toctree:: :maxdepth: 2 :caption: PPO Example :titlesonly: :numbered: examples/ppo_code_architecture examples/config examples/gsm8k_example .. toctree:: :maxdepth: 1 :caption: PPO Trainer and Workers workers/ray_trainer workers/fsdp_workers workers/megatron_workers .. toctree:: :maxdepth: 1 :caption: Experimental Results experiment/ppo .. toctree:: :maxdepth: 1 :caption: Advance Usage and Extension advance/placement advance/dpo_extension advance/fsdp_extension advance/megatron_extension .. toctree:: :maxdepth: 1 :caption: FAQ faq/faq Contribution ------------- veRL is free software; you can redistribute it and/or modify it under the terms of the Apache License 2.0. We welcome contributions. Join us on `GitHub <https://github.com/volcengine/verl>`_, `Slack <https://join.slack.com/t/verlgroup/shared_invite/zt-2w5p9o4c3-yy0x2Q56s_VlGLsJ93A6vA>`_ and `Wechat <https://raw.githubusercontent.com/eric-haibin-lin/verl-community/refs/heads/main/WeChat.JPG>`_ for discussions. Code formatting ^^^^^^^^^^^^^^^^^^^^^^^^ We use yapf (Google style) to enforce strict code formatting when reviewing MRs. Run yapf at the top level of verl repo: .. code-block:: bash pip3 install yapf yapf -ir -vv --style ./.style.yapf verl examples tests
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/index.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/index.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3288 }
Extend to other RL(HF) algorithms ================================= We already implemented the complete training pipeline of the PPO algorithms. To extend to other algorithms, we analyze the high-level principle to use veRL and provide a tutorial to implement the DPO algorithm. Users can follow the similar paradigm to extend to other RL algorithms. .. note:: **Key ideas**: Single process drives multi-process computation and data communication. Overall Approach ---------------- Step 1: Consider what multi-machine multi-GPU computations are needed for each model, such as ``generate_sequence`` , ``compute_log_prob`` and ``update_policy`` in the actor_rollout model. Implement distributed single-process-multiple-data (SPMD) computation and encapsulate them into APIs Step 2: Based on different distributed scenarios, including FSDP and 3D parallelism in Megatron-LM, implement single-process control of data interaction among multi-process computations. Step 3: Utilize the encapsulated APIs to implement the control flow Example: Online DPO ------------------- We use veRL to implement a simple online DPO algorithm. The algorithm flow of Online DPO is as follows: 1. There is a prompt (rollout) generator which has the same weight as the actor model. After a batch of prompts are fed into the generator, it generates N responses for each prompt. 2. Send all the prompts + responses to a verifier for scoring, which can be reward model or a rule-based function. Then sort them in pairs to form a training batch. 3. Use this training batch to train the actor model using DPO. During the process, a reference policy is needed. Step 1: What are the multi-machine multi-GPU computations ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **Sample Generator** Implementation details: .. code:: python from verl.single_controller.base import Worker from verl.single_controller.ray import RayWorkerGroup, RayClassWithInitArgs, RayResourcePool import ray @ray.remote class SampleGenerator(Worker): def __init__(self, config): super().__init__() self.config = config def generate_sequences(self, data): pass Here, ``SampleGenerator`` can be viewed as a multi-process pulled up by ``torchrun``, with each process running the same code (SPMD). ``SampleGenerator`` needs to implement a ``generate_sequences`` API for the control flow to call. The implementation details inside can use any inference engine including vllm, sglang and huggingface. Users can largely reuse the code in verl/verl/trainer/ppo/rollout/vllm_rollout/vllm_rollout.py and we won't go into details here. **ReferencePolicy inference** API: compute reference log probability .. code:: python from verl.single_controller.base import Worker import ray @ray.remote class ReferencePolicy(Worker): def __init__(self): super().__init__() self.model = Model() def infer(self, data): return self.model(data) **Actor update** API: Update actor model parameters .. 
code:: python from verl.single_controller.base import Worker import ray @ray.remote class DPOActor(Worker): def __init__(self): super().__init__() self.model = Model() self.model = FSDP(self.model) # or other distributed strategy self.optimizer = optim.Adam(self.model.parameters(), lr=1e-3) self.loss_fn = xxx def update(self, data): self.optimizer.zero_grad() logits = self.model(data) loss = self.loss_fn(logits) loss.backward() self.optimizer.step() **Notes: How to distinguish between control processes and distributed computation processes** ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - Control processes are generally functions directly decorated with ``@ray.remote`` - Computation processes are all wrapped into a ``RayWorkerGroup``. Users can reuse most of the distribtued computation logics implemented in PPO algorithm, including FSDP and Megatron-LM backend in verl/verl/trainer/ppo. Step 2: Based on different distributed scenarios, implement single-process control of multi-process data interaction ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ **The core problem to solve here is how a single process sends data to multiple processes, drives multi-process computation, and how the control process obtains the results of multi-process computation.** First, we initialize the multi-process ``WorkerGroup`` in the control process. .. code:: python @ray.remote(num_cpus=1) def main_task(config): # construct SampleGenerator resource_pool = RayResourcePool(process_on_nodes=[8] * 2) # 16 GPUs ray_cls = RayClassWithInitArgs(SampleGenerator, config=config) # put SampleGenerator onto resource pool worker_group = RayWorkerGroup(resource_pool, ray_cls) # construct reference policy As we can see, in the control process, multiple processes are wrapped into a ``RayWorkerGroup``. Inside this ``WorkerGroup``, there is a ``self._workers`` member, where each worker is a RayActor (https://docs.ray.io/en/latest/ray-core/actors.html) of SampleGenerator. ray_trainer.md also provide an implementation of ``MegatronRayWorkerGroup``. Assuming the model is distributed using FSDP, and there is a batch of data on the control process, for data parallelism, the underlying calling process is: .. code:: python data = xxx data_list = data.chunk(dp_size) output = [] for d in data_list: # worker_group._workers[i] is a SampleGenerator output.append(worker_group._workers[i].generate_sequences.remote(d)) output = ray.get(output) output = torch.cat(output) Single process calling multiple processes involves the following 3 steps: 1. Split the data into DP parts on the control process. 2. Send the data to remote, call the remote computation through RPC, and utilize multi-process computation. 3. Obtain the computation results of each worker on the control process and merge them. Frequently calling these 3 steps on the controller process greatly hurts code readability. **In veRL, we have abstracted and encapsulated these 3 steps, so that the worker's method + dispatch + collect can be registered into the worker_group** .. 
code:: python from verl.single_controller.base.decorator import register def dispatch_data(worker_group, data): return data.chunk(worker_group.world_size) def collect_data(worker_group, data): return torch.cat(data) dispatch_mode = { 'dispatch_fn': dispatch_data, 'collect_fn': collect_data } @register(dispatch_mode=dispatch_mode) def generate_sequences(self, data): pass In this way, we can directly call the method inside the worker through the ``worker_group`` on the control (driver) process (which is a single process): .. code:: python output = worker_group.generate_sequences(data) This single line includes data splitting, data distribution and computation, and data collection. Furthermore, the model parallelism size of each model is usually fixed, including dp, tp, pp. So for these common distributed scenarios, we have pre-implemented specific dispatch and collect methods,in `decorator.py <https://github.com/volcengine/verl/blob/main/verl/single_controller/base/decorator.py>`_, which can be directly used to wrap the computations. .. code:: python from verl.single_controller.base.decorator import register, Dispatch @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def generate_sequences(self, data: DataProto) -> DataProto: pass Here it requires the data interface to be ``DataProto``. Definition of ``DataProto`` is in `protocol.py <https://github.com/volcengine/verl/blob/main/verl/protocol.py>`_. Step 3: Main training loop ~~~~~~~~~~~~~~~~~~~~~~~~~~ With the above training flows, we can implement the algorithm's control flow. It is recommended that ``main_task`` is also a ray remote process. .. code:: python @ray.remote(num_cpus=1) def main_task(config): # construct SampleGenerator resource_pool = RayResourcePool(process_on_nodes=[8] * 2) # 16 GPUs ray_cls = RayClassWithInitArgs(SampleGenerator, config=config) # put SampleGenerator onto resource pool sample_gen = RayWorkerGroup(resource_pool, ray_cls) # construct reference policy ray_cls = RayClassWithInitArgs(ReferencePolicy) ref_policy = RayWorkerGroup(resource_pool, ray_cls) # construct actor ray_cls = RayClassWithInitArgs(DPOActor) dpo_policy = RayWorkerGroup(resource_pool, ray_cls) dataloader = DataLoader() for data in dataloader: # generate data data = sample_gen.generate_sequences(data) # generate scores for each data data = generate_scores(data) # generate pairwise data using scores data = generate_pairwise_data(data) # generate ref_log_prob data.batch['ref_log_prob'] = ref_policy.infer(data) # update using dpo dpo_policy.update(data) # logging Here, different ``WorkerGroups`` can be placed in the same resource pool or in different resource pools using ``create_colocated_worker_cls`` similar as in `ray_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/ray_trainer.py>`_.
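The verifier and pair-construction steps (``generate_scores`` and ``generate_pairwise_data``) are left undefined in the loop above. Purely as an illustration, the sketch below shows one way they could look if each sample were a plain Python dict holding a prompt, its N generated responses, and (after scoring) a list of scores; the real pipeline would operate on ``DataProto`` batches, and the function names here are only the placeholders used in the loop above.

.. code:: python

   def generate_scores(samples, verifier):
       """Score every response of every prompt with a verifier callable.

       ``verifier`` may be a reward model call or a rule-based function; it is
       assumed to map (prompt, response) -> float.
       """
       for sample in samples:
           sample["scores"] = [verifier(sample["prompt"], r) for r in sample["responses"]]
       return samples


   def generate_pairwise_data(samples):
       """Build (chosen, rejected) pairs from the best- and worst-scored responses."""
       pairs = []
       for sample in samples:
           order = sorted(range(len(sample["scores"])), key=lambda i: sample["scores"][i])
           lo, hi = order[0], order[-1]
           if sample["scores"][hi] > sample["scores"][lo]:  # skip all-tied samples
               pairs.append({
                   "prompt": sample["prompt"],
                   "chosen": sample["responses"][hi],
                   "rejected": sample["responses"][lo],
               })
       return pairs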
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/dpo_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/dpo_extension.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 9680 }
Add models with the FSDP backend
==================================

Model
--------------------------
In principle, our FSDP backend can support any HF model, and we can synchronize the actor model weights with vLLM using `hf_weight_loader.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_5_4/hf_weight_loader.py>`_.
However, ``hf_weight_loader`` gathers the full state_dict of the model during synchronization, which may cause OOM. We suggest using ``dtensor_weight_loader``, which gathers the full model parameters layer by layer to reduce the peak memory usage. We already support the dtensor weight loader for the models below in `dtensor_weight_loader.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_5_4/dtensor_weight_loaders.py>`_:

- ``GPT2LMHeadModel``
- ``LlamaForCausalLM``
- ``LLaMAForCausalLM``
- ``MistralForCausalLM``
- ``InternLMForCausalLM``
- ``AquilaModel``
- ``AquilaForCausalLM``
- ``Phi3ForCausalLM``
- ``GemmaForCausalLM``
- ``Gemma2ForCausalLM``
- ``GPTBigCodeForCausalLM``
- ``Starcoder2ForCausalLM``
- ``Qwen2ForCausalLM``
- ``DeepseekV2ForCausalLM``

To implement the ``dtensor_weight_loader`` for a model that is supported in vLLM, follow the guide for the Gemma model below:

1. Copy the ``load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]])`` from the vllm model class to ``dtensor_weight_loaders.py``
2. Modify the arguments to ``(actor_weights: Dict, vllm_model: nn.Module)``
3. Replace ``self`` with ``vllm_model``
4. Add ``local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)`` before each ``param = params_dict[name]`` and modify the subsequent weight loading to use ``local_loaded_weight``.
5. Register the implemented dtensor weight loader to ``__MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__``.

.. code-block:: diff

   - def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]):
   + def gemma_dtensor_weight_loader(actor_weights: Dict, vllm_model: nn.Module) -> nn.Module:
         stacked_params_mapping = [
             # (param_name, shard_name, shard_id)
             ("qkv_proj", "q_proj", "q"),
             ("qkv_proj", "k_proj", "k"),
             ("qkv_proj", "v_proj", "v"),
             ("gate_up_proj", "gate_proj", 0),
             ("gate_up_proj", "up_proj", 1),
         ]
   -     params_dict = dict(self.named_parameters())
   +     params_dict = dict(vllm_model.named_parameters())
         loaded_params = set()
   -     for name, loaded_weight in weights:
   +     for name, loaded_weight in actor_weights.items():
             for (param_name, shard_name, shard_id) in stacked_params_mapping:
                 if shard_name not in name:
                     continue
                 name = name.replace(shard_name, param_name)
                 # Skip loading extra bias for GPTQ models.
                 if name.endswith(".bias") and name not in params_dict:
                     continue
   +             local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)
                 param = params_dict[name]
                 weight_loader = param.weight_loader
   -             weight_loader(param, loaded_weight, shard_id)
   +             weight_loader(param, local_loaded_weight.to(dtype=param.dtype), shard_id)
                 break
             else:
                 # lm_head is not used in vllm as it is tied with embed_token.
                 # To prevent errors, skip loading lm_head.weight.
                 if "lm_head.weight" in name:
                     continue
                 # Skip loading extra bias for GPTQ models.
                 if name.endswith(".bias") and name not in params_dict:
                     continue
   +             local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)
                 param = params_dict[name]
                 weight_loader = getattr(param, "weight_loader", default_weight_loader)
   -             weight_loader(param, loaded_weight)
   +             weight_loader(param, local_loaded_weight.to(dtype=param.dtype))
                 loaded_params.add(name)
         unloaded_params = params_dict.keys() - loaded_params
         if unloaded_params:
             raise RuntimeError(
                 "Some weights are not initialized from checkpoints: "
                 f"{unloaded_params}")
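Step 5 above only mentions the registration itself. As an illustrative sketch (the actual layout of the registry in ``dtensor_weight_loaders.py`` may differ), the registry can be thought of as a plain dict from the vLLM architecture name to the loader function implemented in steps 1-4, plus a small lookup helper; ``load_dtensor_weights`` below is a hypothetical name used only for this example.

.. code:: python

   # Map the vLLM model architecture name to its dtensor weight loader.
   __MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__ = {
       "GemmaForCausalLM": gemma_dtensor_weight_loader,
       # ... other supported architectures ...
   }


   def load_dtensor_weights(actor_weights, vllm_model):
       """Look up the loader by the vLLM model's class name and apply it."""
       arch = vllm_model.__class__.__name__
       if arch not in __MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__:
           raise ValueError(f"No dtensor weight loader registered for {arch}")
       __MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__[arch](actor_weights, vllm_model)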
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/fsdp_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/fsdp_extension.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4399 }
Add models with the Megatron-LM backend
=========================================

Model
-----------

The most challenging aspect of using the Megatron-LM backend is implementing the models for training. Currently, we implement the Llama model, which supports data parallelism, tensor parallelism, pipeline parallelism (also vPP) and sequence parallelism. We also implement remove-padding (sequence packing) on the Llama model, which can be found in `modeling_llama_megatron.py <https://github.com/volcengine/verl/blob/main/verl/models/llama/megatron/modeling_llama_megatron.py>`_.

To support other models, users are required to:

1. Implement a model similar to ``modeling_llama_megatron.py`` that satisfies the parallelism requirements of Megatron-LM. Then register your model in `registry.py <https://github.com/volcengine/verl/blob/main/verl/models/registry.py>`_.
2. Implement checkpoint utils that can load a full checkpoint (e.g., a HuggingFace checkpoint) into the partitioned models at runtime. Then register your loader to ``weight_loader_registry`` in `weight_loader_registry.py <https://github.com/volcengine/verl/blob/main/verl/models/weight_loader_registry.py>`_.
3. Implement a weight loader that synchronizes the weights from the Megatron model to the rollout (vLLM) model. Note that both the actor model and the rollout model are partitioned during runtime. So, it's advisable to keep the parameter names in the actor model implementation aligned with those of the rollout model; otherwise, you may need an additional name mapping and even a weight transformation. The weight loader implementation is in `megatron_weight_loaders.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_6_3/megatron_weight_loaders.py>`_.
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/megatron_extension.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/megatron_extension.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1688 }
Ray API Design Tutorial ======================================= We provide a tutorial for our Ray API design, including: - Ray basic concepts - Resource Pool and RayWorkerGroup - Data Dispatch, Execution and Collection - Initialize the RayWorkerGroup and execute the distributed computation in the given Resource Pool See details in `tutorial.ipynb <https://github.com/volcengine/verl/blob/main/examples/ray/tutorial.ipynb>`_.
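As a quick orientation before opening the notebook, the hedged sketch below (adapted from the online DPO extension tutorial in these docs) shows how the pieces fit together: a resource pool describes the GPUs, a worker class is wrapped together with its constructor arguments, and a ``RayWorkerGroup`` places one worker per GPU and exposes their methods to the single driver process. ``MyWorker`` is a placeholder name used only for this example.

.. code:: python

   import ray
   from verl.single_controller.base import Worker
   from verl.single_controller.ray import RayResourcePool, RayClassWithInitArgs, RayWorkerGroup


   @ray.remote
   class MyWorker(Worker):
       """A placeholder SPMD worker; every copy runs the same code."""

       def generate_sequences(self, data):
           return data  # a real worker would run inference here


   # One node with 8 GPUs (adjust to your cluster).
   resource_pool = RayResourcePool(process_on_nodes=[8])
   ray_cls = RayClassWithInitArgs(MyWorker)
   worker_group = RayWorkerGroup(resource_pool, ray_cls)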
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/advance/placement.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/advance/placement.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 429 }
.. _config-explain-page: Config Explaination =================== ppo_trainer.yaml for FSDP Backend --------------------------------- Data ~~~~ .. code:: yaml data: tokenizer: null train_files: ~/data/rlhf/gsm8k/train.parquet val_files: ~/data/rlhf/gsm8k/test.parquet prompt_key: prompt max_prompt_length: 512 max_response_length: 512 train_batch_size: 1024 val_batch_size: 1312 return_raw_input_ids: False # This should be set to true when the tokenizer between policy and rm differs return_raw_chat: False - ``data.train_files``: Training set parquet. Can be a list or a single file. The program will read all files into memory, so it can't be too large (< 100GB). The path can be either local path or HDFS path. For HDFS path, we provide utils to download it to DRAM and convert the HDFS path to local path. - ``data.val_files``: Validation parquet. Can be a list or a single file. - ``data.prompt_key``: The field in the dataset where the prompt is located. Default is 'prompt'. - ``data.max_prompt_length``: Maximum prompt length. All prompts will be left-padded to this length. An error will be reported if the length is too long - ``data.max_response_length``: Maximum response length. Rollout in RL algorithms (e.g. PPO) generates up to this length - ``data.train_batch_size``: Batch size sampled for one training iteration of different RL algorithms. - ``data.val_batch_size``: Batch size sampled for one validation iteration. - ``data.return_raw_input_ids``: Whether to return the original input_ids without adding chat template. This is mainly used to accommodate situations where the reward model's chat template differs from the policy. It needs to be decoded first, then apply the RM's chat template. If using a model-based RM, and the policy and RM chat_templates are different, this flag needs to be set - ``data.return_raw_chat``: - ``data.truncation``: Truncate the input_ids or prompt length if they exceed max_prompt_length. Default is 'error', not allow exceed the max_prompt_length. The users should increase the max_prompt_length if throwing the error. Actor/Rollout/Reference Policy ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code:: yaml actor_rollout_ref: hybrid_engine: True model: path: ~/models/deepseek-llm-7b-chat external_lib: null override_config: {} enable_gradient_checkpointing: False actor: strategy: fsdp # This is for backward-compatibility ppo_mini_batch_size: 256 ppo_micro_batch_size: 64 grad_clip: 1.0 clip_ratio: 0.2 entropy_coeff: 0.001 ppo_epochs: 1 shuffle: True optim: lr: 1e-6 lr_warmup_steps_ratio: 0. 
# the total steps will be injected during runtime min_lr_ratio: null # only useful for warmup with cosine warmup_style: constant # select from constant/cosine total_training_steps: -1 # must be override by program fsdp_config: wrap_policy: # transformer_layer_cls_to_wrap: None min_num_params: 0 param_offload: False grad_offload: False optimizer_offload: False ref: fsdp_config: param_offload: False wrap_policy: # transformer_layer_cls_to_wrap: None min_num_params: 0 log_prob_micro_batch_size: 128 rollout: name: vllm temperature: 1.0 top_k: -1 # 0 for hf rollout, -1 for vllm rollout top_p: 1 response_length: ${data.max_response_length} # for vllm rollout dtype: bfloat16 # should align with FSDP gpu_memory_utilization: 0.5 ignore_eos: False enforce_eager: True free_cache_engine: True load_format: dummy_dtensor # or dummy_hf or dummy_megatron tensor_model_parallel_size: 2 max_num_batched_tokens: 8192 max_num_seqs: 1024 log_prob_micro_batch_size: 128 # for vllm and hf rollout do_sample: True **Common config for actor, rollout and reference model** - ``actor_rollout_ref.hybrid_engine``: Whether it's a hybrid engine, currently only supports hybrid engine - ``actor_rollout_ref.model.path``: Huggingface model path. This can be either local path or HDFS path. For HDFS path, we provide utils to download it to DRAM and convert the HDFS path to local path. - ``actor_rollout_ref.model.external_libs``: Additional Python packages that need to be imported. Used to register models or tokenizers into the Huggingface system. - ``actor_rollout_ref.model.override_config``: Used to override some of the model's original configurations, mainly dropout - ``actor_rollout_ref.model.enable_gradient_checkpointing``: Whether to enable gradient checkpointing for the actor **Actor model** - ``actor_rollout_ref.actor.strategy``: fsdp or megatron. In this example, we use fsdp backend. - ``actor_rollout_ref.actor.ppo_mini_batch_size``: One sample is split into multiple sub-batches with batch_size=ppo_mini_batch_size for PPO updates - ``actor_rollout_ref.actor.ppo_micro_batch_size``: Similar to gradient accumulation, the micro_batch_size for one forward pass, trading speed for GPU memory - ``actor_rollout_ref.actor.grad_clip``: Gradient clipping for actor updates - ``actor_rollout_ref.actor.clip_ratio``: PPO clip ratio - ``actor_rollout_ref.actor.entropy_coeff``: The weight of entropy when calculating PPO loss - ``actor_rollout_ref.actor.ppo_epochs``: Number of epochs for PPO updates on one set of sampled data - ``actor_rollout_ref.actor.shuffle``: Whether to shuffle data when there are multiple epochs - ``actor_rollout_ref.actor.optim``: Actor's optimizer parameters - ``actor_rollout_ref.actor.fsdp_config``: FSDP config for actor training - ``wrap_policy``: FSDP wrap policy. By default, it uses Huggingface's wrap policy, i.e., wrapping by DecoderLayer - No need to set transformer_layer_cls_to_wrap, so we comment it. - ``*_offload``: Whether to enable parameter, gradient and optimizer offload - Trading speed for GPU memory. **Reference Model** - ``actor_rollout_ref.ref``: FSDP config same as actor. **For models larger than 7B, it's recommended to turn on offload for ref by default** - ``actor_rollout_ref.ref.log_prob_micro_batch_size``: The batch size for one forward pass in the computation of ``ref_log_prob``. **Rollout Model** - ``actor_rollout_ref.rollout.name``: hf/vllm. We use vLLM by default because it's much efficient and our hybrid engine is implemented with vLLM. - Rollout (Auto-regressive) parameters. 
The key should be equal to the property name in vLLM's ``SamplingParams``. - ``temperature``, ``top_k``, ``top_p`` and others: Sampling parameters in ``SamplingParams``. - ``dtype``: Rollout model parameters type. This should be align with the actor model parameter type in FSDP/Megatron backend. - ``gpu_memory_utilization``: The proportion of the remaining GPU memory allocated for kv cache after other models have initialized when using vllm. - ``tensor_model_parallel_size``: TP size for rollout. Only effective for vllm. - ``log_prob_micro_batch_size``: Micro_batch_size (The batch size for one forward pass) for recalculating log_prob. - ``do_sample``: Whether to sample. If set to False, the rollout model will perform greedy sampling. We disable ``do_sample`` during validation. - ``actor_rollout_ref.rollout.ignore_eos``: Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. - ``actor_rollout_ref.rollout.free_cache_engine``: Offload the KVCache after rollout generation stage. Default is True. When set to True, we need to disable the usage of CUDAGraph (set ``enforce_eager`` to True.) - ``actor_rollout_ref.rollout.enforce_eager``: Whether to use CUDAGraph in vLLM generation. Default set to True to disable CUDAGraph. - ``actor_rollout_ref.rollout.load_format``: Which weight loader to use to load the actor model weights to the rollout model. - ``auto``: Use Megatron weight loader. - ``megatron``: Use Megatron weight loader. Deployed with Megatron backend. The input model ``state_dict()`` is already partitioned along TP dimension and already gathered along PP dimension. This weight loader requires that the Rollout model and Actor model's parameters shape and name should be identical. - ``dtensor``: Default solution when using Huggingface weight loader. Deployed with FSDP backend and the state_dict_type is ``StateDictType.SHARDED_STATE_DICT``. Recommend to use this weight loader - ``hf``: Use Huggingface weight loader. Deployed with FSDP backend and the state_dict_type is ``StateDictType.FULL_STATE_DICT``. This solution doesn't need to rewrite the weight loader for each model implemented in vLLM but it results in larger peak memory usage. - ``dummy_hf``, ``dummy_megatron``, ``dummy_dtensor``: Random initialization. .. note:: **NOTED**: In this config field, users only need to select from ``dummy_megatron``, ``dummy_dtensor``, ``dummy_hf`` for rollout initialization and our hybrid engine will select the corresponding weight loader (i.e., ``megatron``, ``dtensor``, ``hf``) during actor/rollout weight synchronization. Critic Model ~~~~~~~~~~~~ Most parameters for Critic are similar to Actor Model. Reward Model ~~~~~~~~~~~~ .. code:: yaml reward_model: enable: False model: input_tokenizer: ${actor_rollout_ref.model.path} # set this to null if the chat template is identical path: ~/models/Anomy-RM-v0.1 external_lib: ${actor_rollout_ref.model.external_lib} fsdp_config: min_num_params: 0 param_offload: False micro_batch_size: 64 max_length: null - ``reward_model.enable``: Whether to enable reward model. If False, we compute the reward only with the user-defined reward functions. In GSM8K and Math examples, we disable reward model. For RLHF alignment example using full_hh_rlhf, we utilize reward model to assess the responses. If False, the following parameters are not effective. - ``reward_model.model`` - ``input_tokenizer``: Input tokenizer. 
  If the reward model's chat template is inconsistent with the policy's, the responses need to be decoded to plain text first and then re-encoded with the RM's chat template before scoring. If the chat templates are consistent, it can be set to null.
  - ``path``: RM's HDFS path or local path. Note that RM only supports ``AutoModelForSequenceClassification``. Other model types need to define their own ``RewardModelWorker`` and pass it from the code.

Algorithm
~~~~~~~~~

.. code:: yaml

   algorithm:
     gamma: 1.0
     lam: 1.0
     adv_estimator: gae
     kl_penalty: kl # how to estimate kl divergence
     kl_ctrl:
       type: fixed
       kl_coef: 0.005

- ``gamma``: Discount factor
- ``lam``: Trade-off between bias and variance in the GAE estimator
- ``adv_estimator``: gae. Currently only supports gae; GRPO will be supported in the future
- ``kl_penalty``: Supports ``kl``, ``abs``, ``mse`` and ``full``. Specifies how to calculate the KL divergence between the actor and the reference policy. For the specific options, refer to `core_algos.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/core_algos.py#L192>`_.

Trainer
~~~~~~~

.. code:: yaml

   trainer:
     total_epochs: 30
     project_name: verl_examples
     experiment_name: gsm8k
     logger: ['console', 'wandb']
     nnodes: 1
     n_gpus_per_node: 8
     save_freq: -1
     test_freq: 2
     critic_warmup: 0
     default_hdfs_dir: ~/experiments/gsm8k/ppo/${trainer.experiment_name} # hdfs checkpoint path
     default_local_dir: checkpoints/${trainer.project_name}/${trainer.experiment_name} # local checkpoint path

- ``trainer.total_epochs``: Number of epochs in training.
- ``trainer.project_name``: For wandb
- ``trainer.experiment_name``: For wandb
- ``trainer.logger``: Supports console and wandb
- ``trainer.nnodes``: Number of nodes used in the training.
- ``trainer.n_gpus_per_node``: Number of GPUs per node.
- ``trainer.save_freq``: The frequency (by iteration) to save checkpoints of the actor and critic model.
- ``trainer.test_freq``: The validation frequency (by iteration).
- ``trainer.critic_warmup``: The number of iterations to train the critic model before actual policy learning.
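The ``kl_penalty`` and ``kl_ctrl`` options described above are implemented in `core_algos.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/core_algos.py>`_. Purely as an illustration of what these knobs control, the sketch below shows common per-token definitions of the ``kl``, ``abs`` and ``mse`` penalties and a fixed KL coefficient controller; treat it as an assumption-laden approximation rather than verl's exact code (the ``full`` option is omitted).

.. code:: python

   import torch


   def kl_penalty_per_token(logprob, ref_logprob, kl_penalty="kl"):
       """Approximate per-token KL penalty between actor and reference policy."""
       diff = logprob - ref_logprob
       if kl_penalty == "kl":
           return diff                 # plain log-ratio estimator
       if kl_penalty == "abs":
           return diff.abs()           # absolute log-ratio
       if kl_penalty == "mse":
           return 0.5 * diff.square()  # squared log-ratio
       raise NotImplementedError(kl_penalty)


   class FixedKLController:
       """kl_ctrl.type == 'fixed': the coefficient never adapts during training."""

       def __init__(self, kl_coef=0.005):
           self.value = kl_coef

       def update(self, current_kl, n_steps):
           pass  # a fixed controller ignores the measured KL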
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/examples/config.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/config.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 12464 }
GSM8K Example
=============

Introduction
------------

In this example, we train an LLM to tackle the GSM8K task.

Paper: https://arxiv.org/pdf/2110.14168

Dataset: https://huggingface.co/datasets/gsm8k

Note that the original paper mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RLHF agent using a rule-based reward model.

Dataset Introduction
--------------------

GSM8K is a math problem dataset. The prompt is an elementary school problem, and the LLM is required to answer it. The training set contains 7473 samples and the test set contains 1319 samples.

**An example**

Prompt

   Katy makes coffee using teaspoons of sugar and cups of water in the ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups of water, calculate the number of teaspoonfuls of sugar she used.

Solution

   The total ratio representing the ingredients she used to make the coffee is 7+13 = <<7+13=20>>20. Since the fraction representing the number of teaspoons she used is 7/20, she used 7/20\ *120 = <<7/20*\ 120=42>>42 #### 42

Step 1: Prepare dataset
-----------------------

.. code:: bash

   cd examples/data_preprocess
   python3 gsm8k.py --local_dir ~/data/gsm8k

Step 2: Download Model
----------------------

There are three ways to prepare the model checkpoints for post-training:

- Download the required models from Hugging Face:

.. code:: bash

   huggingface-cli download deepseek-ai/deepseek-math-7b-instruct --local-dir ~/models/deepseek-math-7b-instruct --local-dir-use-symlinks False

- Use a model already stored in a local directory or HDFS path.
- Alternatively, use the model name on Hugging Face (e.g., deepseek-ai/deepseek-math-7b-instruct) directly in the ``actor_rollout_ref.model.path`` and ``critic.model.path`` fields of the run script.

Note that users should prepare checkpoints for the actor, critic and reward model.

[Optional] Step 3: SFT your Model
---------------------------------

We provide an SFT Trainer using PyTorch FSDP in `fsdp_sft_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/fsdp_sft_trainer.py>`_. Users can customize their own SFT script using our FSDP SFT Trainer.

We also provide various training scripts for SFT on the GSM8K dataset in the `gsm8k sft directory <https://github.com/volcengine/verl/blob/main/examples/gsm8k/sft/>`_.

.. code:: shell

   set -x

   torchrun -m verl.trainer.fsdp_sft_trainer \
       data.train_files=$HOME/data/gsm8k/train.parquet \
       data.val_files=$HOME/data/gsm8k/test.parquet \
       data.prompt_key=question \
       data.response_key=answer \
       data.micro_batch_size=8 \
       model.partial_pretrain=deepseek-ai/deepseek-coder-6.7b-instruct \
       trainer.default_hdfs_dir=hdfs://user/verl/experiments/gsm8k/deepseek-coder-6.7b-instruct/ \
       trainer.project_name=gsm8k-sft \
       trainer.experiment_name=gsm8k-sft-deepseek-coder-6.7b-instruct \
       trainer.total_epochs=4 \
       trainer.logger=['console','wandb']

Step 4: Perform PPO training with your model on GSM8K Dataset
-------------------------------------------------------------

- Prepare your own run.sh script. Here's an example for the GSM8K dataset and the deepseek-llm-7b-chat model.
- Users could replace the ``data.train_files``, ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on their environment.
- See :doc:`config` for a detailed explanation of each config field.

**Reward Model/Function**

We use a rule-based reward model. We force the model to produce a final answer following four "#" characters, as shown in the solution.
We extract the final answer from both the solution and the model's output using regular expression matching. We compare them and assign a reward of 1 for a correct answer, 0.1 for an incorrect answer, and 0 when no answer is produced. **Training Script** Training script examples for the FSDP and Megatron-LM backends are stored in the examples/ppo_trainer directory. .. code:: bash cd ../ppo_trainer bash run_deepseek7b_llm.sh The content of run_deepseek7b_llm.sh: .. code:: bash set -x python3 -m verl.trainer.main_ppo \ data.train_files=~/data/rlhf/gsm8k/train.parquet \ data.val_files=~/data/rlhf/gsm8k/test.parquet \ data.train_batch_size=1024 \ data.val_batch_size=1312 \ data.max_prompt_length=512 \ data.max_response_length=512 \ actor_rollout_ref.model.path=~/models/deepseek-llm-7b-chat \ actor_rollout_ref.actor.optim.lr=1e-6 \ actor_rollout_ref.actor.ppo_mini_batch_size=256 \ actor_rollout_ref.actor.ppo_micro_batch_size=64 \ actor_rollout_ref.actor.fsdp_config.param_offload=False \ actor_rollout_ref.actor.fsdp_config.grad_offload=False \ actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \ actor_rollout_ref.rollout.micro_batch_size=256 \ actor_rollout_ref.rollout.log_prob_micro_batch_size=128 \ actor_rollout_ref.rollout.tensor_model_parallel_size=2 \ actor_rollout_ref.rollout.name=vllm \ actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \ actor_rollout_ref.ref.log_prob_micro_batch_size=128 \ actor_rollout_ref.ref.fsdp_config.param_offload=True \ critic.optim.lr=1e-5 \ critic.model.path=~/models/deepseek-llm-7b-chat \ critic.model.enable_gradient_checkpointing=False \ critic.ppo_micro_batch_size=64 \ critic.model.fsdp_config.param_offload=False \ critic.model.fsdp_config.grad_offload=False \ critic.model.fsdp_config.optimizer_offload=False \ algorithm.kl_ctrl.kl_coef=0.001 \ trainer.critic_warmup=0 \ trainer.logger=['console','wandb'] \ trainer.project_name='verl_example_gsm8k' \ trainer.experiment_name='deepseek_llm_7b_function_rm' \ trainer.n_gpus_per_node=8 \ trainer.nnodes=1 \ trainer.save_freq=-1 \ trainer.total_epochs=15
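The scoring rule described above can be sketched as follows. This is a minimal illustration (the actual implementation lives in ``verl/utils/reward_score/gsm8k.py``; the function and variable names here are illustrative only):

.. code:: python

    import re

    def extract_answer(text):
        """Return the number following '####', or None if absent."""
        match = re.search(r"#### (\-?[0-9\.\,]+)", text)
        if match is None:
            return None
        return match.group(1).replace(',', '')

    def compute_score(response, ground_truth):
        """Rule-based reward: 1 for a correct answer, 0.1 for a wrong answer
        in the expected format, 0 when no answer can be extracted."""
        answer = extract_answer(response)
        if answer is None:
            return 0.0
        return 1.0 if answer == ground_truth else 0.1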
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/examples/gsm8k_example.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/gsm8k_example.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 5986 }
PPO Example Architecture ======================== Let's start with the Proximal Policy Optimization algorithm, which is the most widely used algorithm in LLM post-training. The main entry point of the PPO algorithm example is: `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py>`_. In this tutorial, we will go through the code architecture in `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py>`_. Define the data --------------- Users need to preprocess and store the dataset in parquet files, and we implement ``RLHFDataset`` to load and tokenize the parquet files. For ``RLHFDataset`` (Default), at least one field is required: - ``prompt``: Contains the string prompt We already provide some examples of processing the datasets to parquet files in the `data_preprocess directory <https://github.com/volcengine/verl/blob/main/examples/data_preprocess>`_. Currently, we support preprocessing of the GSM8k, MATH, HellaSwag and Full_hh_rlhf datasets. See :doc:`../preparation/prepare_data` for more information. Define the reward functions for different datasets -------------------------------------------------- In this main entry point, users only need to define their own reward function based on the datasets (or applications) utilized in PPO training. For example, we already provide reward functions for the `GSM8k <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/gsm8k.py>`_ and `MATH <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/math.py>`_ datasets in ``_select_rm_score_fn``. In the ``RewardManager``, we compute the reward score based on the ``data_source`` to select the corresponding reward function. For some RLHF datasets (e.g., full_hh_rlhf), the reward model is utilized to assess the responses without any reward functions. In this case, the ``RewardManager`` will return the ``rm_score`` computed by the reward model directly. See `reward functions <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_ for the detailed implementation. Define worker classes --------------------- .. code:: python if config.actor_rollout_ref.actor.strategy == 'fsdp': # for FSDP backend assert config.actor_rollout_ref.actor.strategy == config.critic.strategy from verl.workers.fsdp_workers import ActorRolloutRefWorker, CriticWorker from verl.single_controller.ray import RayWorkerGroup ray_worker_group_cls = RayWorkerGroup elif config.actor_rollout_ref.actor.strategy == 'megatron': # for Megatron backend assert config.actor_rollout_ref.actor.strategy == config.critic.strategy from verl.workers.megatron_workers import ActorRolloutRefWorker, CriticWorker from verl.single_controller.ray.megatron import NVMegatronRayWorkerGroup ray_worker_group_cls = NVMegatronRayWorkerGroup # Ray worker class for Megatron-LM else: raise NotImplementedError from verl.trainer.ppo.ray_trainer import ResourcePoolManager, Role role_worker_mapping = { Role.ActorRollout: ActorRolloutRefWorker, Role.Critic: CriticWorker, Role.RefPolicy: ActorRolloutRefWorker } global_pool_id = 'global_pool' resource_pool_spec = { global_pool_id: [config.trainer.n_gpus_per_node] * config.trainer.nnodes, } mapping = { Role.ActorRollout: global_pool_id, Role.Critic: global_pool_id, Role.RefPolicy: global_pool_id, } Step 1: Construct the mapping between roles and workers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A role represents a group of workers in the same process. 
We have pre-defined several roles in `ray_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/ray_trainer.py#L38>`_. .. code:: python class Role(Enum): """ To create more roles dynamically, you can subclass Role and add new members """ Actor = 0 # This worker only has Actor Rollout = 1 # This worker only has Rollout ActorRollout = 2 # This worker has both actor and rollout, it's a HybridEngine Critic = 3 # This worker only has critic RefPolicy = 4 # This worker only has reference policy RewardModel = 5 # This worker only has reward model ActorRolloutRef = 6 # This worker contains actor, rollout and reference policy simultaneously Step 2: Define the worker class corresponding to this role ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - We have pre-implemented the ``ActorRolloutRefWorker``. Through different configs, it can be a standalone actor, a standalone rollout, an ActorRollout HybridEngine, or an ActorRolloutRef HybridEngine - We also pre-implemented workers for ``Actor``, ``Rollout``, ``Critic``, ``Reward Model`` and ``Reference model`` on two different backends: PyTorch FSDP and Megatron-LM. See `FSDP Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py>`_ and `Megatron-LM Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py>`_ for more information. Step 3: Define resource pool id and resource pool spec ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - A resource pool is a division of the global GPU resources; ``resource_pool_spec`` is a dict mapping from an id to the number of GPUs - In the above example, we defined a global resource pool, global_pool_id, and then put all roles on this one resource pool with all the GPUs in this post-training task. This refers to *co-located* placement, where all the models share the same set of GPUs. - See resource pool and placement for advanced usage. Defining reward model/function ------------------------------ .. code:: python # we should adopt a multi-source reward function here # - for rule-based rm, we directly call a reward score # - for model-based rm, we call a model # - for code related prompt, we send to a sandbox if there are test cases # - finally, we combine all the rewards together # - The reward type depends on the tag of the data if config.reward_model.enable: from verl.workers.fsdp_workers import RewardModelWorker role_worker_mapping[Role.RewardModel] = RewardModelWorker mapping[Role.RewardModel] = global_pool_id reward_fn = RewardManager(tokenizer=tokenizer, num_examine=0) # Note that we always use function-based RM for validation val_reward_fn = RewardManager(tokenizer=tokenizer, num_examine=1) resource_pool_manager = ResourcePoolManager(resource_pool_spec=resource_pool_spec, mapping=mapping) Since not all tasks use model-based RM, users need to define here whether it's a model-based RM or a function-based RM. - If it's a model-based RM, directly add the ``RewardModel`` role in the resource mapping and add it to the resource pool mapping. - Note that the pre-defined ``RewardModelWorker`` only supports models with the structure of huggingface ``AutoModelForSequenceClassification``. If it's not this model, you need to define your own RewardModelWorker in `FSDP Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py>`_ and `Megatron-LM Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py>`_. 
- If it's a function-based RM, users are required to specify the reward function for each dataset. .. code:: python def _select_rm_score_fn(data_source): if data_source == 'openai/gsm8k': return gsm8k.compute_score elif data_source == 'lighteval/MATH': return math.compute_score else: raise NotImplementedError See the reward functions implemented in this `directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/>`_ for more information. Define, init and run the PPO Trainer ------------------------------------ .. code:: python trainer = RayPPOTrainer(config=config, tokenizer=tokenizer, role_worker_mapping=role_worker_mapping, resource_pool_manager=resource_pool_manager, ray_worker_group_cls=ray_worker_group_cls, reward_fn=reward_fn, val_reward_fn=val_reward_fn) trainer.init_workers() trainer.fit() - We first initialize the ``RayPPOTrainer`` with the user config, tokenizer and all the above worker mappings, resource pool, worker group and reward functions - We then call ``trainer.init_workers()`` to initialize the models on the allocated GPUs (in the resource pool) - The actual PPO training is executed in ``trainer.fit()`` veRL can be easily extended to other RL algorithms by reusing the Ray model workers, resource pool and reward functions. See :doc:`extension<../advance/dpo_extension>` for more information. Details of the ``RayPPOTrainer`` are discussed in :doc:`Ray Trainer<../workers/ray_trainer>`.
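As a sketch of the function-based path, a new dataset's rule-based reward can be wired in by extending the selection logic above. The ``my_org/my_dataset`` tag and ``my_dataset_compute_score`` function below are hypothetical; only the GSM8k and MATH branches come from the actual code:

.. code:: python

    from verl.utils.reward_score import gsm8k, math

    # Hypothetical reward function for a new dataset; the signature follows the
    # GSM8k/MATH examples (response string + ground-truth string -> float score).
    def my_dataset_compute_score(solution_str, ground_truth):
        return 1.0 if ground_truth in solution_str else 0.0

    def _select_rm_score_fn(data_source):
        if data_source == 'openai/gsm8k':
            return gsm8k.compute_score
        elif data_source == 'lighteval/MATH':
            return math.compute_score
        elif data_source == 'my_org/my_dataset':  # assumed data_source tag
            return my_dataset_compute_score
        else:
            raise NotImplementedError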
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/examples/ppo_code_architecture.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/examples/ppo_code_architecture.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 9044 }
.. _algo-baseline-page: Algorithm Baselines =================== GSM8k ------------------ Assuming the GSM8k dataset is preprocessed via ``python3 examples/data_preprocess/gsm8k.py``, refer to the table below to reproduce PPO training from different pre-trained models. .. _Huggingface: https://huggingface.co/google/gemma-2-2b-it#benchmark-results .. _SFT Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-sft-0.411.log .. _SFT+PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-ppo-bsz512_4-prompt1024-resp-512-0.640.log .. _wandb: https://api.wandb.ai/links/verl-team/h7ux8602 .. _Qwen Blog: https://qwenlm.github.io/blog/qwen2.5-llm/ .. _PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/Qwen2.5-0.5B-bsz256_2-prompt1024-resp512-0.567.log

+----------------------------+-----------------------+------------+----------------------------------------+
| Model                      | Method                | Test score | Details                                |
+============================+=======================+============+========================================+
| google/gemma-2-2b-it       | pretrained checkpoint | 23.9       | `Huggingface`_                         |
+----------------------------+-----------------------+------------+----------------------------------------+
| google/gemma-2-2b-it       | SFT                   | 52.06      | `SFT Command and logs`_                |
+----------------------------+-----------------------+------------+----------------------------------------+
| google/gemma-2-2b-it       | SFT + PPO             | 64.02      | `SFT+PPO Command and logs`_, `wandb`_  |
+----------------------------+-----------------------+------------+----------------------------------------+
| Qwen/Qwen2.5-0.5B-Instruct | pretrained checkpoint | 36.4       | `Qwen Blog`_                           |
+----------------------------+-----------------------+------------+----------------------------------------+
| Qwen/Qwen2.5-0.5B-Instruct | PPO                   | 56.7       | `PPO Command and logs`_                |
+----------------------------+-----------------------+------------+----------------------------------------+
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/experiment/ppo.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/experiment/ppo.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 3029 }
Frequently Asked Questions ==================================== Ray related ------------ How to add breakpoints for debugging with distributed Ray? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Please check out the official debugging guide from Ray: https://docs.ray.io/en/latest/ray-observability/ray-distributed-debugger.html Distributed training ------------------------ How to run multi-node post-training with Ray? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ You can start a ray cluster and submit a ray job, following the official guide from Ray: https://docs.ray.io/en/latest/ray-core/starting-ray.html
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/faq/faq.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/faq/faq.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 798 }
Prepare Data (Parquet) for Post-Training ======================================== Before starting the post-training job, we need to prepare the data for the policy training. The data should be stored in the parquet format. We provide several data preprocessing scripts for different datasets, including GSM8K, MATH, HellaSwag and Full_hh_rlhf. To prepare other datasets, we need to follow these steps: The data preprocessing script can be divided into two parts: 1. The first part is the common part, which loads the dataset from huggingface's ``datasets`` package, preprocesses it with ``make_map_fn`` and then stores it in the parquet format. .. code:: python import re import os import datasets from verl.utils.hdfs_io import copy, makedirs import argparse # To extract the solution for each prompt in the dataset # def extract_solution(solution_str): # ... if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--local_dir', default='/opt/tiger/gsm8k') parser.add_argument('--hdfs_dir', default=None) args = parser.parse_args() num_few_shot = 5 data_source = 'openai/gsm8k' dataset = datasets.load_dataset(data_source, 'main') train_dataset = dataset['train'] test_dataset = dataset['test'] # Construct a `def make_map_fn(split)` for the corresponding datasets. # ... train_dataset = train_dataset.map(function=make_map_fn('train'), with_indices=True) test_dataset = test_dataset.map(function=make_map_fn('test'), with_indices=True) local_dir = args.local_dir hdfs_dir = args.hdfs_dir train_dataset.to_parquet(os.path.join(local_dir, 'train.parquet')) test_dataset.to_parquet(os.path.join(local_dir, 'test.parquet')) makedirs(hdfs_dir) copy(src=local_dir, dst=hdfs_dir) 2. The users are required to implement the ``make_map_fn()`` function (as well as ``extract_solution``) on their own to support different datasets or tasks. We have already implemented the data preprocessing for the GSM8k, MATH, HellaSwag and Full_hh_rlhf datasets. We take the GSM8k dataset as an example: **GSM8K** In the ``make_map_fn``, each data item should consist of the following 5 fields: 1. ``data_source``: The name of the dataset. Used to index the corresponding reward function in the ``RewardManager``. 2. ``prompt``: This field should be constructed in the format of the huggingface chat_template. The tokenizer in ``RLHFDataset`` will apply the chat template and tokenize the prompt. 3. ``ability``: Defines the task category. 4. ``reward_model``: Currently, we only utilize the ``ground_truth`` field during evaluation. The ``ground_truth`` is computed by the ``extract_solution`` function. **NOTE** that the implementation of the corresponding reward function should align with this extracted ``ground_truth``. 5. ``extra_info``: Records some information about the current prompt. Not used for now. .. code:: python def extract_solution(solution_str): solution = re.search("#### (\\-?[0-9\\.\\,]+)", solution_str) # extract the solution after #### assert solution is not None final_solution = solution.group(0) final_solution = final_solution.split('#### ')[1].replace(',', '') return final_solution instruction_following = "Let's think step by step and output the final answer after \"####\"." 
# add a row to each data item that represents a unique id def make_map_fn(split): def process_fn(example, idx): question = example.pop('question') question = question + ' ' + instruction_following answer = example.pop('answer') solution = extract_solution(answer) data = { "data_source": data_source, "prompt": [{ "role": "user", "content": question }], "ability": "math", "reward_model": { "style": "rule", "ground_truth": solution }, "extra_info": { 'split': split, 'index': idx } } return data return process_fn
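Once the preprocessing script has run, the generated parquet files can be sanity-checked quickly, for example with pandas. The snippet below is illustrative only; the paths assume the default ``--local_dir`` used in the GSM8k example above:

.. code:: python

    import os
    import pandas as pd

    # Inspect the generated train split (path assumes --local_dir ~/data/gsm8k).
    train_path = os.path.expanduser('~/data/gsm8k/train.parquet')
    train_df = pd.read_parquet(train_path)

    # Expect the 5 fields described above.
    print(train_df.columns.tolist())
    print(train_df.iloc[0]['prompt'])        # chat-format prompt
    print(train_df.iloc[0]['reward_model'])  # {'style': 'rule', 'ground_truth': ...}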
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/preparation/prepare_data.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/preparation/prepare_data.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4335 }
Implement Reward Function for Dataset ====================================== For each dataset, we need to implement a reward function or utilize a reward model to compute the rewards for the generated responses. We have already pre-implemented some reward functions in the `reward_score directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_. Currently, we support reward functions for the GSM8k and MATH datasets. For RLHF datasets (e.g., full_hh_rlhf) and code generation (e.g., APPS), we utilize a reward model and SandBox (to be open-sourced soon) for evaluation respectively. RewardManager ------------- In the entrypoint of the PPO post-training script `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py#L33>`_, we implement a ``RewardManager`` that utilizes the pre-implemented reward functions to compute the scores for each response. In the ``RewardManager``, we implement a ``__call__`` function to compute the score for each response. All the reward functions are executed by ``compute_score_fn``. The input is a ``DataProto``, which includes: - ``input_ids``, ``attention_mask``: ``input_ids`` and ``attention_mask`` after applying chat_template, including prompt and response - ``responses``: response tokens - ``ground_truth``: The ground truth string of the current prompt. Stored in ``non_tensor_batch`` in the ``DataProto``, which should be preprocessed in the parquet files. - ``data_source``: The dataset name of the current prompt. Stored in ``non_tensor_batch`` in the ``DataProto``, which should be preprocessed in the parquet files. After detokenizing the responses, the response string and the ground truth string are passed to ``compute_score_fn`` to compute the score for each response. Reward Functions ---------------- We have already pre-implemented some reward functions in the `reward_score directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_. - In the `GSM8k example <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/gsm8k.py>`_, we force the response to output the final answer after four ####, then use string matching to compare it with the ground truth. If completely correct, score 1 point; if the format is correct but the answer is wrong, score 0.1 points; if the format is incorrect, score 0 points. - In the `MATH example <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/math.py>`_, we follow the implementation in the `lm-evaluation-harness repository <https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/hendrycks_math/utils.py>`_.
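The per-sample loop described above can be sketched as follows. This is a simplified illustration of what a ``RewardManager``-style ``__call__`` does; class, argument and tensor layouts are simplified relative to the actual ``main_ppo.py`` implementation:

.. code:: python

    import torch

    class SimpleRewardManager:
        """Illustrative only: decode responses, pick a reward function per
        data_source, and write the scalar score onto the final response token."""

        def __init__(self, tokenizer, compute_score_fn_by_source):
            self.tokenizer = tokenizer
            self.compute_score_fn_by_source = compute_score_fn_by_source

        def __call__(self, responses, response_lengths, ground_truths, data_sources):
            reward_tensor = torch.zeros(responses.shape, dtype=torch.float32)
            for i in range(responses.shape[0]):
                valid_len = int(response_lengths[i])
                response_str = self.tokenizer.decode(responses[i][:valid_len])
                score_fn = self.compute_score_fn_by_source[data_sources[i]]
                score = score_fn(response_str, ground_truths[i])
                # token-level score: all reward mass placed on the last response token
                reward_tensor[i, valid_len - 1] = score
            return reward_tensor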
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/preparation/reward_function.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/preparation/reward_function.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 2605 }
Installation ============ Requirements ------------ - **Python**: Version >= 3.9 - **CUDA**: Version >= 12.1 veRL supports various backends. Currently, the following configurations are available: - **FSDP** and **Megatron-LM** (optional) for training. - **vLLM** and **TGI** for rollout generation, **SGLang** support coming soon. Training backends ------------------ We recommend using the **FSDP** backend to investigate, research and prototype different models, datasets and RL algorithms. The guide for using the FSDP backend can be found in `PyTorch FSDP Backend <https://verl.readthedocs.io/en/latest/workers/fsdp_workers.html>`_. For users who pursue better scalability, we recommend using the **Megatron-LM** backend. Currently, we support Megatron-LM@core_v0.4.0 with some internal patches (to be updated soon to the latest version, directly relying on upstream Megatron-LM). The guide for using the Megatron-LM backend can be found in `Megatron-LM Backend <https://verl.readthedocs.io/en/latest/workers/megatron_workers.html>`_. Install from docker image ------------------------- We provide pre-built Docker images for quick setup. Image and tag: ``verlai/verl:vemlp-th2.4.0-cu124-vllm0.6.3-ray2.10-te1.7-v0.0.3``. See files under ``docker/`` if you want to build your own image. 1. Launch the desired Docker image: .. code:: bash docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN <image:tag> 2. Inside the container, install veRL: .. code:: bash # install the nightly version (recommended) git clone https://github.com/volcengine/verl && cd verl && pip3 install -e . # or install from pypi via `pip3 install verl` 3. Setup Megatron (optional) If you want to enable training with Megatron, the Megatron code must be added to PYTHONPATH: .. code:: bash cd .. git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git cp verl/patches/megatron_v4.patch Megatron-LM/ cd Megatron-LM && git apply megatron_v4.patch pip3 install -e . export PYTHONPATH=$PYTHONPATH:$(pwd) You can also get the Megatron code after verl's patch via .. code:: bash git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM Install from custom environment --------------------------------- To manage the environment, we recommend using conda: .. code:: bash conda create -n verl python==3.9 conda activate verl For installing the latest version of veRL, the best way is to clone and install it from source. Then you can modify our code to customize your own post-training jobs. .. code:: bash # install verl together with some lightweight dependencies in setup.py git clone https://github.com/volcengine/verl.git cd verl pip3 install -e . You can also install veRL using ``pip3 install`` .. code:: bash # directly install from pypi pip3 install verl Dependencies ------------ veRL requires Python >= 3.9 and CUDA >= 12.1. veRL supports various backends; we currently release FSDP and Megatron-LM for actor training and vLLM for rollout generation. The following dependencies are required for all backends, PyTorch FSDP and Megatron-LM. The pros, cons and extension guide for using the PyTorch FSDP backend can be found in :doc:`FSDP Workers<../workers/fsdp_workers>`. .. 
code:: bash # install torch [or you can skip this step and let vllm install the correct version for you] pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121 # install vllm pip3 install ray vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1 # flash attention 2 pip3 install flash-attn --no-build-isolation For users who pursue better scalability, we recommend using the Megatron-LM backend. Please install the above dependencies first. Currently, we support Megatron-LM@core_v0.4.0 and we fixed some internal issues of Megatron-LM. Here's the additional installation guide (optional). The pros, cons and extension guide for using the Megatron-LM backend can be found in :doc:`Megatron-LM Workers<../workers/megatron_workers>`. .. code:: bash # Megatron-LM Backend (optional) # apex pip3 install -v --disable-pip-version-check --no-cache-dir --no-build-isolation \ --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" \ git+https://github.com/NVIDIA/apex # transformer engine pip3 install git+https://github.com/NVIDIA/[email protected] # megatron core v0.4.0: clone and apply the patch # You can also get the patched Megatron code via # git clone -b core_v0.4.0_verl https://github.com/eric-haibin-lin/Megatron-LM cd .. git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git cd Megatron-LM cp ../verl/patches/megatron_v4.patch . git apply megatron_v4.patch pip3 install -e . export PYTHONPATH=$PYTHONPATH:$(pwd)
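After installation, a quick sanity check of the environment can be run from Python. This is an illustrative snippet only; the version strings printed will depend on your setup:

.. code:: python

    # Quick check that the core dependencies import and CUDA is visible.
    import torch
    import vllm
    import verl

    print('torch:', torch.__version__, '| cuda available:', torch.cuda.is_available())
    print('vllm:', vllm.__version__)
    print('verl imported from:', verl.__file__)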
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/start/install.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/start/install.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4914 }
.. _quickstart: ========================================================= Quickstart: Post-train an LLM using PPO with GSM8K dataset ========================================================= Post-train an LLM using GSM8K dataset =================================================================== Introduction ------------ .. _hf_dataset_gsm8k: https://huggingface.co/datasets/gsm8k In this example, we train an LLM to tackle the `GSM8k <hf_dataset_gsm8k>`_ task with function-based rewards. [1]_ Prerequisites: - the latest version of ``verl`` and its dependencies installed following the installation guide. Using the docker image is recommended. - a GPU with at least 24 GB HBM Dataset Introduction -------------------- GSM8k is a math problem dataset. The prompt is an elementary school problem. The LLM is asked to solve the math problem. Below is an example: Prompt Katy makes coffee using teaspoons of sugar and cups of water in the ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups of water, calculate the number of teaspoonfuls of sugar she used. Solution The total ratio representing the ingredients she used to make the coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the number of teaspoons she used is 7/20, she used 7/20 * 120 = <<7/20*120=42>>42 #### 42 Step 1: Prepare the dataset ---------------------------- We preprocess the dataset in parquet format so that (1) it contains the necessary fields for computing RL rewards and (2) it is faster to read. .. code-block:: bash python3 examples/data_preprocess/gsm8k.py --local_dir ~/data/gsm8k Step 2: Download a model for post-training ------------------------------------------- Usually we recommend starting with an "instruct" model variant so that the model follows instructions. In this example, we start with the ``Qwen2.5-0.5B-Instruct`` model. If you start from a "base" model variant, doing SFT before RL is recommended. Refer to the `sft directory <https://github.com/volcengine/verl/blob/main/examples/gsm8k/sft/>`_ and the `SFT Trainer <https://github.com/volcengine/verl/blob/main/verl/trainer/fsdp_sft_trainer.py>`_ for further details. .. code-block:: bash python3 -c "import transformers; transformers.pipeline('text-generation', model='Qwen/Qwen2.5-0.5B-Instruct')" Step 3: Perform PPO training with the instruct model ---------------------------------------------------------------------- **Reward Model/Function** We use a pre-defined rule-based reward model. We force the model to produce a final answer following 4 “#” characters, as shown in the solution. We extract the final answer from both the solution and the model's output using regular expression matching. We assign a reward of 1 for a correct answer, 0.1 for an incorrect answer and 0 when no answer is produced. For more details, please refer to `verl/utils/reward_score/gsm8k.py <https://github.com/volcengine/verl/blob/v0.1/verl/utils/reward_score/gsm8k.py>`_. **Training Script** Now let's run PPO training with the dataset and model above. [2]_ Set ``data.train_files``, ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on your dataset and model names or paths. .. 
code-block:: bash PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \ data.train_files=$HOME/data/gsm8k/train.parquet \ data.val_files=$HOME/data/gsm8k/test.parquet \ data.train_batch_size=256 \ data.val_batch_size=1312 \ data.max_prompt_length=512 \ data.max_response_length=256 \ actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \ actor_rollout_ref.actor.optim.lr=1e-6 \ actor_rollout_ref.actor.ppo_mini_batch_size=64 \ actor_rollout_ref.actor.ppo_micro_batch_size=4 \ actor_rollout_ref.rollout.log_prob_micro_batch_size=8 \ actor_rollout_ref.rollout.tensor_model_parallel_size=1 \ actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \ actor_rollout_ref.ref.log_prob_micro_batch_size=4 \ critic.optim.lr=1e-5 \ critic.model.path=Qwen/Qwen2.5-0.5B-Instruct \ critic.ppo_micro_batch_size=4 \ algorithm.kl_ctrl.kl_coef=0.001 \ trainer.logger=['console'] \ +trainer.val_before_train=False \ trainer.default_hdfs_dir=null \ trainer.n_gpus_per_node=1 \ trainer.nnodes=1 \ trainer.save_freq=10 \ trainer.test_freq=10 \ trainer.total_epochs=15 2>&1 | tee verl_demo.log You are expected to see the following logs, indicating training in progress. The key metric ``val/test_score/openai/gsm8k`` is computed every ``trainer.test_freq`` steps: .. code-block:: bash step:0 - timing/gen:21.470 - timing/ref:4.360 - timing/values:5.800 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.109 - timing/update_critic:15.664 - critic/vf_loss:14.947 - critic/vf_clipfrac:0.000 - critic/vpred_mean:-2.056 - critic/grad_norm:1023.278 - critic/lr(1e-4):0.100 - timing/update_actor:20.314 - actor/entropy_loss:0.433 - actor/pg_loss:-0.005 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:1.992 - actor/lr(1e-4):0.010 - critic/score/mean:0.004 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.004 - critic/rewards/max:1.000 - critic/rewards/min:0.000 - critic/advantages/mean:-0.000 - critic/advantages/max:2.360 - critic/advantages/min:-2.280 - critic/returns/mean:0.003 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.045 - critic/values/max:9.500 - critic/values/min:-14.000 - response_length/mean:239.133 - response_length/max:256.000 - response_length/min:77.000 - prompt_length/mean:104.883 - prompt_length/max:175.000 - prompt_length/min:68.000 step:1 - timing/gen:23.020 - timing/ref:4.322 - timing/values:5.953 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.118 - timing/update_critic:15.646 - critic/vf_loss:18.472 - critic/vf_clipfrac:0.384 - critic/vpred_mean:1.038 - critic/grad_norm:942.924 - critic/lr(1e-4):0.100 - timing/update_actor:20.526 - actor/entropy_loss:0.440 - actor/pg_loss:0.000 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.060 - actor/lr(1e-4):0.010 - critic/score/mean:0.000 - critic/score/max:0.000 - critic/score/min:0.000 - critic/rewards/mean:0.000 - critic/rewards/max:0.000 - critic/rewards/min:0.000 - critic/advantages/mean:0.000 - critic/advantages/max:2.702 - critic/advantages/min:-2.616 - critic/returns/mean:0.000 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.280 - critic/values/max:11.000 - critic/values/min:-16.000 - response_length/mean:232.242 - response_length/max:256.000 - response_length/min:91.000 - prompt_length/mean:102.398 - prompt_length/max:185.000 - prompt_length/min:70.000 Checkout :ref:`algo-baseline-page` for full training and validation logs for reference. 
The checkpoint is saved at the following dir by default: ``checkpoints/${trainer.project_name}/${trainer.experiment_name}`` To enable ``wandb`` for experiment tracking, set the following configs: .. code-block:: bash trainer.logger=['console','wandb'] \ trainer.project_name=$YOUR_PROJECT_NAME \ trainer.experiment_name=$YOUR_RUN_NAME \ If you encounter out-of-memory issues with HBM less than 32GB, enabling the following configs would help: .. code-block:: bash actor_rollout_ref.actor.ppo_micro_batch_size=1 \ critic.ppo_micro_batch_size=1 \ For the full set of configs, please refer to :ref:`config-explain-page` for a detailed explanation and performance tuning. .. [1] The original paper (https://arxiv.org/pdf/2110.14168) mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RL agent using a rule-based reward model. .. [2] More training script examples for the FSDP and Megatron-LM backends are stored in the `examples/ppo_trainer <https://github.com/volcengine/verl/tree/main/examples/ppo_trainer>`_ directory.
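As an illustrative post-run helper, the validation score can be pulled out of the console log. This sketch assumes the metric appears in the same ``key:value`` format as the step metrics shown above and that the output was tee'd to ``verl_demo.log`` as in the training command:

.. code-block:: python

    import re

    # Collect every reported validation score from the quickstart log.
    scores = []
    with open('verl_demo.log') as f:
        for line in f:
            match = re.search(r'val/test_score/openai/gsm8k:([0-9.]+)', line)
            if match:
                scores.append(float(match.group(1)))

    print('validation scores over training:', scores)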
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/start/quickstart.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/start/quickstart.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 7899 }
PyTorch FSDP Backend ====================== We support the PyTorch FSDP Backend by implementing various workers for actor, critic, reference, rollout and reward models. We also implement the ``FSDPVLLMShardingManager`` that reshards weights between FSDP and vLLM in `fsdp_vllm.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/hybrid_engine/fsdp_vllm.py>`_. **Pros** - Readily supports various models. - Users only need to implement the corresponding ``dtensor_weight_loader`` for weight synchronization between FSDP and vLLM. With ``hf_weight_loader``, users can directly use any model supported by both HF and vLLM without any code change. - Easy to organize the forward and backward computation for each model. **Cons** - Poor scalability when it comes to large-scale models (e.g. Llama 70B and 405B) - The resharding overhead between actor and rollout could be larger than with the Megatron-LM backend. Due to its simplicity, we recommend using the FSDP backend for algorithm research and prototyping. FSDP Workers -------------- ActorRolloutRefWorker ^^^^^^^^^^^^^^^^^^^^^ Actor/Rollout HybridEngine '''''''''''''''''''''''''' 1. HybridEngine, Actor and Rollout initialization API. .. code:: python @register(dispatch_mode=Dispatch.ONE_TO_ALL) def init_model(self): ``ONE_TO_ALL``: when calling the ``init_model`` function from the driver process, each worker (on a GPU) will execute the following model initialization process. The initialization details of HybridEngine, Actor and Rollout are highlighted below: 1. ``DataParallelPPOActor`` implements the simple PPO computation logic when the model is built with FSDP, including computing log prob and model updates. 2. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM Engine and make it execute under SPMD to fit into our ``WorkerGroup`` design. 3. ``FSDPVLLMShardingManager`` is a context manager that performs the actual resharding between actor and rollout. See the `source code <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py#L42>`_ for more information. 2. Generate sequences and recompute log prob .. code:: python @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def generate_sequences(self, prompts: DataProto): - ``Dispatch.DP_COMPUTE_PROTO``: The data will be dispatched and collected along the DP dimension - In this function, the rollout model will perform auto-regressive generation and the actor model will recompute the old log prob for the generated response. 3. Update actor model .. code:: python @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def update_actor(self, data: DataProto): - Update the actor model weights using the PPO & entropy loss. ReferenceModel '''''''''''''' 1. Reference model initialization The reference model is initialized using the same function as the actor model, without initializing the HybridEngine and Optimizer. The model is then wrapped by the ``DataParallelPPOActor``. 2. Compute reference log prob .. code:: python @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def compute_ref_log_prob(self, data: DataProto): - In this function, the reference model will call the compute log prob function in ``DataParallelPPOActor`` to compute the reference log prob. CriticWorker and RewardWorker ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1. Model initialization Quite similar to the reference model. The CriticWorker will perform additional initialization for the Optimizer. 2. Compute Values for CriticWorker .. 
code:: python @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def compute_values(self, data: DataProto): 3. Update Critic .. code:: python @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def update_critic(self, data: DataProto): 4. Compute Reward .. code:: python @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO) def compute_rm_score(self, data: DataProto): HybridShard ------------ We do not yet support FSDP `HybridShard`. To support this, we may need to construct a 2D device mesh and test the corresponding ``dtensor_weight_loader`` and ``hf_weight_loader`` for each model.
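A new worker API can be exposed to the driver with the same decorator pattern shown above. The sketch below is illustrative only: the exact import paths, base class, and the ``DataProto.from_dict`` / ``critic_module_forward`` helpers are assumptions and may differ from the actual worker implementations:

.. code:: python

    from verl import DataProto
    from verl.single_controller.base.decorator import register, Dispatch

    class MyCriticWorker:  # in practice this would subclass the FSDP worker base class
        @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
        def compute_values(self, data: DataProto) -> DataProto:
            # `data` is the shard of the batch dispatched to this DP rank;
            # the driver collects the returned DataProto along the DP dimension.
            values = self.critic_module_forward(data)  # hypothetical helper
            return DataProto.from_dict(tensors={'values': values})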
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/workers/fsdp_workers.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/fsdp_workers.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 4166 }
Megatron-LM Backend ===================== We support the Megatron Backend by implementing various workers for actor, critic, reference, rollout and reward models. We also implement the ``3DHybridEngine`` using Megatron-LM and vLLM in `megatron_vllm.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/hybrid_engine/megatron_vllm.py>`_. **Pros** - Supports 3D parallelism and sequence parallelism for the best scalability and throughput. - The 3D HybridEngine can significantly reduce peak memory usage and reduce the weight synchronization overhead between actor and rollout. **Cons** - Users should implement their own models for Megatron-LM - Users should implement the corresponding weight_loader to - synchronize the model weight between actor (in Megatron) and rollout (in vLLM). - load weights from checkpoints to the corresponding model in Megatron-LM Megatron Workers ---------------- MegatronWorker ^^^^^^^^^^^^^^ ``MegatronWorker`` is the base class of the different Megatron worker classes. In this class, the ``get_megatron_global_info`` and ``get_megatron_rank_info`` functions retrieve the 3D parallel world size and rank of each ``Worker`` running on a specific GPU. This information will be used in the transfer protocol for the Megatron Backend. The following ``Worker`` classes for the different models will be utilized to construct the ``WorkerGroup``. We implement various APIs for each ``Worker`` class decorated by ``@register(dispatch_mode=)``. These APIs can be called by the ray driver process. The data can be correctly collected and dispatched following the ``dispatch_mode`` of each function. The supported dispatch modes (i.e., transfer protocols) can be found in `decorator.py <https://github.com/volcengine/verl/blob/main/verl/single_controller/base/decorator.py>`_. ActorRolloutRefWorker ^^^^^^^^^^^^^^^^^^^^^ This class is implemented for the Actor/Rollout HybridEngine or for the reference model to initialize their model and perform computation. Actor/Rollout HybridEngine '''''''''''''''''''''''''' 1. HybridEngine, Actor and Rollout initialization API. .. code:: python @register(dispatch_mode=Dispatch.ONE_TO_ALL) def init_model(self): ``ONE_TO_ALL``: when calling the ``init_model`` function from the driver process, each worker (on a GPU) will execute the following model initialization process. The initialization details of HybridEngine, Actor and Rollout are highlighted below: 1. ``AllGatherPPModel`` holds the memory buffer for both Actor and Rollout and supports weight resharding between actor and rollout. 2. ``MegatronPPOActor`` implements the simple PPO computation logic when the model is built with Megatron, including computing log prob and model updates. 3. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM Engine and make it execute under SPMD to fit into our ``WorkerGroup`` design. 4. ``MegatronVLLMShardingManager`` is a context manager that performs the actual resharding between actor and rollout. See the `source code <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py#L63>`_ for more information. .. code:: python # Initialize the 3D HybridEngine hybrid_engine = AllGatherPPModel(model_provider=megatron_actor_model_provider) # Fetch the model at current rank actor_module = hybrid_engine.this_rank_models ... 
# build actor model self.actor = MegatronPPOActor(config=self.config.actor, model_config=self.actor_model_config, megatron_config=megatron_config, actor_module=self.actor_module, actor_optimizer=self.actor_optimizer, actor_optimizer_config=self.actor_optim_config) # build rollout # rollout initialization rollout = vLLMRollout(actor_module=params, config=self.config.rollout, tokenizer=self.tokenizer, model_hf_config=self.actor_model_config, train_tp=mpu.get_tensor_model_parallel_world_size()) # perform weight resharding between actor and rollout sharding_manager = MegatronVLLMShardingManager(module=self.hybrid_engine, inference_engine=rollout.inference_engine, model_config=self.actor_model_config, layer_name_mapping=layer_name_mapping) ... 2. Generate sequences and recompute log prob .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_PP_AS_DP_PROTO) def generate_sequences(self, prompts: DataProto): - ``Dispatch.MEGATRON_PP_AS_DP_PROTO``: The PP dimension of the actor model will be regarded as the DP dimension. The driver process will then dispatch and collect the data according to this reorganization. This is because, in the HybridEngine, the actor weights, which usually use larger 3D parallel sizes, will be gathered along the PP dimension and TP dimension. Therefore, the corresponding data should be dispatched and collected through the 3D parallel group of the rollout model, rather than the actor model. However, the world_size and rank information can only be retrieved from ``get_megatron_global_info`` and ``get_megatron_rank_info``, which record the 3D information for the actor model. Moreover, the data resharding inside the TP dimension will be processed within the HybridEngine. - In this function, the rollout model will perform auto-regressive generation and the actor model will recompute the old log prob for the generated response. 3. Update actor model .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def update_actor(self, data: DataProto): - ``Dispatch.MEGATRON_COMPUTE_PROTO``: The user passes the data partitioned by the DP dimension. The data is dispatched to all tp/pp ranks within the same dp group, and ultimately only output data from tp=0 and the last pp rank is collected. - Update the actor model weights using the PPO & entropy loss. ReferenceModel '''''''''''''' 1. Reference model initialization The reference model is initialized using the same function as the actor model, without initializing the HybridEngine and Optimizer. The model is then wrapped by the ``MegatronPPOActor``. 2. Compute reference log prob .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def compute_ref_log_prob(self, data: DataProto): - In this function, the reference model will call the compute log prob function in ``MegatronPPOActor`` to compute the reference log prob. CriticWorker and RewardWorker ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1. Model initialization Quite similar to the reference model. The CriticWorker will perform additional initialization for the Optimizer. 2. Compute Values for CriticWorker .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def compute_values(self, data: DataProto): 3. Update Critic .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def update_critic(self, data: DataProto): 4. Compute Reward .. 
code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def compute_rm_score(self, data: DataProto): Context Parallel ---------------- This requires the developer/contributor to implement context parallelism in both Megatron-LM and the models.
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/workers/megatron_workers.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/megatron_workers.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 7477 }
PPO Ray Trainer =============== We implement the RayPPOTrainer, which is a trainer that runs on the driver process on a single CPU/GPU node (default is CPU). The RayPPOTrainer includes 3 core functions for data preparation, WorkerGroup initialization and the PPO training loop. Data Preparation ---------------- The ``RayPPOTrainer``, as a single process, is responsible for loading a complete batch of samples (prompts) from the dataset and then dispatching them to the different worker_groups running on different GPUs. To generalize the data loading, we implement the ``RLHFDataset`` class to load the preprocessed parquet files, apply chat templates to the prompts, add padding, truncate prompts that exceed the max prompt length and then tokenize. .. code:: python self.train_dataset = RLHFDataset(parquet_files=self.config.data.train_files, tokenizer=self.tokenizer, prompt_key=self.config.data.prompt_key, max_prompt_length=self.config.data.max_prompt_length, filter_prompts=True, return_raw_chat=self.config.data.get('return_raw_chat', False), truncation='error') Then, the dataloader will iterate the dataset under the PPO mini batch size. WorkerGroup Initialization -------------------------- We first introduce a basic implementation of initializing the ``WorkerGroup`` of the actor model on a given set of GPUs. .. code:: python # max_colocate_count means the number of WorkerGroups (i.e. processes) in each RayResourcePool # For FSDP backend, we recommend using max_colocate_count=1 that merges all WorkerGroups into one. # For Megatron backend, we recommend using max_colocate_count>1 that can utilize different WorkerGroups for different models resource_pool = RayResourcePool(process_on_nodes=[config.trainer.n_gpus_per_node] * config.trainer.nnodes, use_gpu=True, max_colocate_count=1) # define actor rollout cls to be init on remote actor_rollout_cls = RayClassWithInitArgs(cls=ActorRolloutWorker) # define actor_rollout worker group actor_rollout_worker_group = MegatronRayWorkerGroup(resource_pool=resource_pool, ray_cls_with_init=actor_rollout_cls, default_megatron_kwargs=config.actor_rollout.megatron) Different WorkerGroups, like ``actor_rollout_worker_group``, ``critic_worker_group`` and ``ref_worker_group``, live in separate processes in the above implementation. The driver process can then call the distributed compute function within the ``actor_rollout_worker_group`` and other roles to construct the RL training loop. For models colocated on the same set of GPUs, we further provide a fine-grained optimization, which merges the ``worker_group`` of different roles in the same process. This optimization can save the redundant CUDA/distributed context in different processes. .. code:: python # initialize WorkerGroup # NOTE: if you want to use a different resource pool for each role, which can support different parallel size, # you should not use `create_colocated_worker_cls`. Instead, directly pass different resource pool to different worker groups. # See TODO(url) for more information. 
all_wg = {} for resource_pool, class_dict in self.resource_pool_to_cls.items(): worker_dict_cls = create_colocated_worker_cls(class_dict=class_dict) wg_dict = self.ray_worker_group_cls(resource_pool=resource_pool, ray_cls_with_init=worker_dict_cls) spawn_wg = wg_dict.spawn(prefix_set=class_dict.keys()) all_wg.update(spawn_wg) if self.use_critic: self.critic_wg = all_wg['critic'] self.critic_wg.init_model() if self.use_reference_policy: self.ref_policy_wg = all_wg['ref'] self.ref_policy_wg.init_model() if self.use_rm: self.rm_wg = all_wg['rm'] self.rm_wg.init_model() # we should create rollout at the end so that vllm can have a better estimation of kv cache memory self.actor_rollout_wg = all_wg['actor_rollout'] self.actor_rollout_wg.init_model() .. note:: For megatron backend, if we merge the ``worker_groups`` into the same processes, all the roles will utilize the same 3D parallel size. To optimize this, we may need to maintain several 3D process groups for each role in the same distributed context. If you want to use different 3D parallel size for different roles, please follow the similar architecture of the first code block to initialize each role's ``worker_group`` PPO Training Loop ----------------- We implement the PPO training loop by calling the functions in worker_group of each role. The input and output data of each function is a ``DataProto`` object implemented in `protocol.py <https://github.com/volcengine/verl/blob/main/verl/protocol.py>`_. In the training loop, trainer will dispatch/collect the data to/from different GPUs following the transfer protocols wrapped in the workers' functions. The computation of PPO micro batches is processed in ``update_actor`` and ``update_critic`` functions. To extend to other RLHF algorithms, such as DPO, GRPO, please refer to :doc:`../advance/dpo_extension`. .. code:: python def fit(self): """ The training loop of PPO. The driver process only need to call the compute functions of the worker group through RPC to construct the PPO dataflow. The light-weight advantage computation is done on the driver process. """ from verl.utils.tracking import Tracking from omegaconf import OmegaConf logger = Tracking(project_name=self.config.trainer.project_name, experiment_name=self.config.trainer.experiment_name, default_backend=self.config.trainer.logger, config=OmegaConf.to_container(self.config, resolve=True)) global_steps = 0 # perform validation before training # currently, we only support validation using the reward_function. 
if self.val_reward_fn is not None: val_metrics = self._validate() pprint(f'Initial validation metrics: {val_metrics}') for epoch in range(self.config.trainer.total_epochs): for batch_dict in self.train_dataloader: metrics = {} batch: DataProto = DataProto.from_single_dict(batch_dict) # batch = batch.to('cuda') # pop those keys for generation gen_batch = batch.pop(batch_keys=['input_ids', 'attention_mask', 'position_ids']) # generate a batch with Timer(name='gen', logger=None) as timer: gen_batch_output = self.actor_rollout_wg.generate_sequences(gen_batch) metrics['timing/gen'] = timer.last batch = batch.union(gen_batch_output) if self.use_reference_policy: # compute reference log_prob with Timer(name='ref', logger=None) as timer: ref_log_prob = self.ref_policy_wg.compute_ref_log_prob(batch) batch = batch.union(ref_log_prob) metrics['timing/ref'] = timer.last # compute values with Timer(name='values', logger=None) as timer: values = self.critic_wg.compute_values(batch) batch = batch.union(values) metrics['timing/values'] = timer.last with Timer(name='adv', logger=None) as timer: # compute scores. Support both model and function-based. # We first compute the scores using reward model. Then, we call reward_fn to combine # the results from reward model and rule-based results. if self.use_rm: # we first compute reward model score reward_tensor = self.rm_wg.compute_rm_score(batch) batch = batch.union(reward_tensor) # we combine with rule-based rm reward_tensor = self.reward_fn(batch) batch.batch['token_level_scores'] = reward_tensor # compute rewards. apply_kl_penalty if available batch, kl_metrics = apply_kl_penalty(batch, kl_ctrl=self.kl_ctrl, kl_penalty=self.config.algorithm.kl_penalty) metrics.update(kl_metrics) # compute advantages, executed on the driver process batch = compute_advantage(batch, self.config.algorithm.gamma, self.config.algorithm.lam, adv_estimator=self.config.algorithm.adv_estimator) metrics['timing/adv'] = timer.last # update critic if self.use_critic: with Timer(name='update_critic', logger=None) as timer: critic_output = self.critic_wg.update_critic(batch) metrics['timing/update_critic'] = timer.last critic_output_metrics = reduce_metrics(critic_output.meta_info['metrics']) metrics.update(critic_output_metrics) # implement critic warmup if self.config.trainer.critic_warmup <= global_steps: # update actor with Timer(name='update_actor', logger=None) as timer: actor_output = self.actor_rollout_wg.update_actor(batch) metrics['timing/update_actor'] = timer.last actor_output_metrics = reduce_metrics(actor_output.meta_info['metrics']) metrics.update(actor_output_metrics) # validate if self.val_reward_fn is not None and (global_steps + 1) % self.config.trainer.test_freq == 0: with Timer(name='testing', logger=None) as timer: val_metrics: dict = self._validate() val_metrics = {f'val/{key}': val for key, val in val_metrics.items()} metrics['timing/testing'] = timer.last metrics.update(val_metrics) # collect metrics data_metrics = compute_data_metrics(batch=batch) metrics.update(data_metrics) # TODO: make a canonical logger that supports various backend logger.log(data=metrics, step=global_steps) if self.config.trainer.save_freq > 0 and (global_steps + 1) % self.config.trainer.save_freq == 0: actor_local_path = os.path.join(self.config.trainer.default_local_dir, 'actor', f'global_step_{global_steps}') actor_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'actor') self.actor_rollout_wg.save_checkpoint(actor_local_path, actor_remote_path) if self.use_critic: 
critic_local_path = os.path.join(self.config.trainer.default_local_dir, 'critic', f'global_step_{global_steps}') critic_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'critic') self.critic_wg.save_checkpoint(critic_local_path, critic_remote_path) global_steps += 1 # perform validation after training if self.val_reward_fn is not None: val_metrics = self._validate() pprint(f'Final validation metrics: {val_metrics}')
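The advantage computation mentioned above (``compute_advantage`` with a GAE estimator, executed on the driver process) follows the standard generalized advantage estimation recursion. Below is a minimal illustrative sketch, ignoring the response masking and any advantage whitening that the actual implementation may perform:

.. code:: python

    import torch

    def gae_advantage(token_level_rewards, values, gamma, lam):
        """Standard GAE over response tokens (illustrative only).

        token_level_rewards, values: tensors of shape (batch, response_len).
        Returns (advantages, returns) of the same shape.
        """
        advantages = torch.zeros_like(token_level_rewards)
        last_gae = torch.zeros_like(token_level_rewards[..., 0])
        num_tokens = token_level_rewards.shape[-1]
        for t in reversed(range(num_tokens)):
            # value of the next token position; zero beyond the last token
            next_value = values[..., t + 1] if t + 1 < num_tokens else torch.zeros_like(last_gae)
            delta = token_level_rewards[..., t] + gamma * next_value - values[..., t]
            last_gae = delta + gamma * lam * last_gae
            advantages[..., t] = last_gae
        returns = advantages + values
        return advantages, returns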
{ "source": "Jiayi-Pan/TinyZero", "title": "docs/workers/ray_trainer.rst", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/docs/workers/ray_trainer.rst", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 12036 }
# Split Placement Example Here we introduce how to run the naive implementation of split placement for the PPO algorithm. We will release the complete version of flexible placement in the near future. For a quickstart, you only need to follow Step 2 to modify the code and then follow Step 4 to execute the split placement example. ### Step 1: Placing the models on different GPUs Specify the placement and resource allocation. In the example, we place the actor and reference in the first half of the GPUs while mapping the critic and reward model (if any) to the second half of the GPUs. ```python actor_rollout_ref_pool_id = 'actor_rollout_ref_pool' critic_pool_id = 'critic_pool' if config.trainer.nnodes // 2 == 0 and config.trainer.n_gpus_per_node // 2 > 0: resource_pool_spec = { actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes, critic_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes, } else: resource_pool_spec = { actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2), critic_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2), } print(f'resource_pool_spec: {resource_pool_spec}') mapping = { Role.ActorRollout: actor_rollout_ref_pool_id, Role.Critic: critic_pool_id, Role.RefPolicy: actor_rollout_ref_pool_id, } mapping[Role.RewardModel] = critic_pool_id ``` ### Step 2: Make the models execute asynchronously Based on the model placement, we need to make the models execute asynchronously. To do so, you need to turn off the `blocking` flag (i.e., `blocking=False`) in our decorator of some model operations. For example, if we want the actor update and critic update to be executed in parallel, we need to make the following modification in `fsdp_workers.py` ``` @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False) def update_actor(self, data: DataProto): ... @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False) def update_critic(self, data: DataProto): ... ``` We can also parallelize the computation of `ref_log_prob`, `values` and `rewards` in the split placement. For simplicity of the tutorial, we only parallelize the actor and critic updates in this example. ### Step 3: Execute these operations in parallel in the single controller process To implement the parallel execution of the actor and critic update, the only thing we need to modify in `ray_trainer.py` is to `get` the concurrent `futures` on the single controller process, as shown in the sketch after Step 4 below. ```python critic_output = critic_output.get() actor_output = actor_output.get() ``` ### Step 4: Run the split placement example ``` bash run_deepseek7b_llm.sh ```
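Putting Steps 2 and 3 together, the driver-side pattern looks roughly like the sketch below. Variable names follow `ray_trainer.py` but are abbreviated here, and the sketch is illustrative rather than the exact trainer code:

```python
# With blocking=False on update_actor/update_critic, both calls return futures
# immediately and run concurrently on their respective resource pools.
critic_output = critic_wg.update_critic(batch)        # dispatched to the critic pool
actor_output = actor_rollout_wg.update_actor(batch)   # dispatched to the actor/ref pool

# Resolve the futures on the single controller (driver) process.
critic_output = critic_output.get()
actor_output = actor_output.get()
```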
{ "source": "Jiayi-Pan/TinyZero", "title": "examples/split_placement/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/examples/split_placement/README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 2686 }
# Models

Common modelzoo such as huggingface/transformers struggles when using PyTorch native model parallelism. Following the design principle of vLLM, we keep the model implementations in verl simple, parallelizable, and highly optimized with packed inputs.

## Adding a New Huggingface Model

### Step 1: Copy the model file from HF to verl
- Add a new file under verl/models/hf
- Copy ONLY the model file from huggingface/transformers/models to verl/models/hf

### Step 2: Modify the model file to use packed inputs
- Remove all the code related to inference (kv cache)
- Modify the inputs to include only
    - input_ids (total_nnz,)
    - cu_seqlens (batch_size + 1,)
    - max_seqlen_in_batch: int
- Note that this requires using flash attention with a causal mask.

### Step 2.5: Add tests
- Add a test to compare this version against the huggingface version
- Follow the existing infrastructure and add tests to tests/models/hf

### Step 3: Add a function to apply tensor parallelism
- Please follow
    - https://pytorch.org/docs/stable/distributed.tensor.parallel.html
    - https://pytorch.org/tutorials/intermediate/TP_tutorial.html
- General comments
    - Tensor parallelism in native PyTorch is NOT auto-parallelism. The way it works is to specify, via configs, how model parameters and inputs/outputs are resharded. These configs are then registered as hooks to perform input/output resharding before/after the model forward.

### Step 4: Add a function to apply data parallelism
- Please use FSDP2 APIs
- See demo here https://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L413

### Step 5: Add a function to apply pipeline parallelism
- Coming in PyTorch 2.4
- Currently only available in alpha in the nightly version
- Check torchtitan for more details
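As a rough illustration of the packed-input convention in Step 2, here is a small sketch (an assumed helper, not verl's actual packing code) that flattens a batch of variable-length sequences and produces `cu_seqlens` and `max_seqlen_in_batch`:

```python
import torch

def pack_sequences(sequences):
    """Pack a list of 1-D LongTensors with different lengths into the flat format above."""
    seqlens = torch.tensor([len(s) for s in sequences], dtype=torch.int32)
    input_ids = torch.cat(sequences)                      # (total_nnz,)
    cu_seqlens = torch.zeros(len(sequences) + 1, dtype=torch.int32)
    cu_seqlens[1:] = torch.cumsum(seqlens, dim=0)         # (batch_size + 1,) cumulative boundaries
    max_seqlen_in_batch = int(seqlens.max())
    return input_ids, cu_seqlens, max_seqlen_in_batch

# Example: sequences of length 3, 5, and 2 pack into a single tensor of 10 tokens,
# with cu_seqlens = [0, 3, 8, 10] marking where each sequence starts and ends.
ids = [torch.randint(0, 100, (n,)) for n in (3, 5, 2)]
packed, cu, max_len = pack_sequences(ids)
```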
{ "source": "Jiayi-Pan/TinyZero", "title": "verl/models/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/verl/models/README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1742 }
# Detached Worker

## How to run (Only on a single node)

- Start a local ray cluster:
```bash
ray start --head --port=6379
```
- Run the server:
```bash
python3 server.py
```
- In another terminal, run the client:
```bash
python3 client.py
```
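For intuition, the detached-worker pattern in Ray looks roughly like the sketch below. It is illustrative only and is not the repository's actual `server.py`/`client.py`; the actor and namespace names are placeholders:

```python
# server.py (sketch): create a named, detached actor that outlives this driver process.
import ray

ray.init(address="auto", namespace="demo")  # attach to the cluster started by `ray start --head`

@ray.remote
class Worker:
    def ping(self):
        return "pong"

Worker.options(name="detached_worker", lifetime="detached").remote()
```

```python
# client.py (sketch): from another process, look the detached actor up by name and call it.
import ray

ray.init(address="auto", namespace="demo")
worker = ray.get_actor("detached_worker")
print(ray.get(worker.ping.remote()))
```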
{ "source": "Jiayi-Pan/TinyZero", "title": "tests/ray/detached_worker/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/tests/ray/detached_worker/README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 241 }
# Dataset Format

## RLHF dataset

We combine all the data sources into a single parquet file. We directly organize the prompt into the chat format so that multi-turn chats can be easily incorporated. In the prompt, we may add instruction-following text to guide the model to output the answer in a particular format so that we can extract it.

Math problems

```json
{
    "data_source": "openai/gsm8k",
    "prompt": [{"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? Let's think step by step and output the final answer after \"####\""}],
    "ability": "math",
    "reward_model": {
        "style": "rule",
        "ground_truth": ["72"]
    }
}
```
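As a sketch of how such a parquet file could be produced (the field names follow the example above; the pandas-based script is an assumption, not the repository's actual preprocessing code):

```python
import pandas as pd

# One row per example, with the prompt already in chat format.
rows = [
    {
        "data_source": "openai/gsm8k",
        "prompt": [{
            "role": "user",
            "content": (
                "Natalia sold clips to 48 of her friends in April, and then she sold "
                "half as many clips in May. How many clips did Natalia sell altogether "
                'in April and May? Let\'s think step by step and output the final answer after "####"'
            ),
        }],
        "ability": "math",
        "reward_model": {"style": "rule", "ground_truth": ["72"]},
    },
]

# All data sources are appended to the same list and written to one parquet file.
pd.DataFrame(rows).to_parquet("train.parquet")
```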
{ "source": "Jiayi-Pan/TinyZero", "title": "verl/utils/dataset/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/verl/utils/dataset/README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 796 }
# Digit completion

This is an example of solving a digit completion problem. The problem is defined as below:

The prompt is a sequence of numbers with a fixed difference. The agent's goal is to complete the next N numbers. If the max number is reached, the next number should be taken modulo the max number.

For example,
- prompt = [1, 2, 3]
- N = 5
- max_number = 6

The response should be [4, 5, 6, 7%6, 8%6] = [4, 5, 6, 1, 2].

# Environment definition

The core definition of the task is in verl/envs/digit_completion/task.py

It is highly recommended to take a look at it for better understanding.

# Run experiments

Users are required to specify the config path and config name (and the model config path relative to the current working directory):

```bash
# cd examples/arithmetic_sequence/rl
# Specify the config path and config name (current working dir)
python3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron'

# The default relative path of the model config is 'config/model_config'. If you want to change it, you can rewrite it in ray_megatron.yaml or use:
python3 -m verl.trainer.ppo.ray_megatron_train_synchronous --config-path=$(pwd)/config --config-name='ray_megatron' ++model.base_path=config/model_config
```
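A tiny sketch of the target construction under the wrap rule stated above (illustrative only; the real logic lives in verl/envs/digit_completion/task.py):

```python
def complete(prompt, n, max_number):
    """Continue an arithmetic sequence for n steps, wrapping values modulo max_number."""
    diff = prompt[1] - prompt[0]
    out, last = [], prompt[-1]
    for _ in range(n):
        last = last + diff
        if last > max_number:        # wrap once the max number is exceeded
            last = last % max_number
        out.append(last)
    return out

# prompt = [1, 2, 3], N = 5, max_number = 6  ->  [4, 5, 6, 1, 2]
print(complete([1, 2, 3], 5, 6))
```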
{ "source": "Jiayi-Pan/TinyZero", "title": "tests/e2e/arithmetic_sequence/rl/README.md", "url": "https://github.com/Jiayi-Pan/TinyZero/blob/main/tests/e2e/arithmetic_sequence/rl/README.md", "date": "2025-01-21T16:49:12", "stars": 10677, "description": "Clean, minimal, accessible reproduction of DeepSeek R1-Zero", "file_size": 1297 }
# Contributor Covenant Code of Conduct ## Our Pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances of any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Enforcement Responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [email protected]. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Enforcement Guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ### 1. Correction **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ### 2. Warning **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. 
No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ### 3. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity). [homepage]: https://www.contributor-covenant.org For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
{ "source": "modelcontextprotocol/servers", "title": "CODE_OF_CONDUCT.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/CODE_OF_CONDUCT.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 5222 }
# Contributing to MCP Servers Thank you for your interest in contributing to the Model Context Protocol (MCP) servers! This document provides guidelines and instructions for contributing. ## Types of Contributions ### 1. New Servers The repository contains reference implementations, as well as a list of community servers. We generally don't accept new servers into the repository. We do accept pull requests to the [README.md](./README.md) adding a reference to your servers. Please keep lists in alphabetical order to minimize merge conflicts when adding new items. - Check the [modelcontextprotocol.io](https://modelcontextprotocol.io) documentation - Ensure your server doesn't duplicate existing functionality - Consider whether your server would be generally useful to others - Follow [security best practices](https://modelcontextprotocol.io/docs/concepts/transports#security-considerations) from the MCP documentation - Create a PR adding a link to your server to the [README.md](./README.md). ### 2. Improvements to Existing Servers Enhancements to existing servers are welcome! This includes: - Bug fixes - Performance improvements - New features - Security enhancements ### 3. Documentation Documentation improvements are always welcome: - Fixing typos or unclear instructions - Adding examples - Improving setup instructions - Adding troubleshooting guides ## Getting Started 1. Fork the repository 2. Clone your fork: ```bash git clone https://github.com/your-username/servers.git ``` 3. Add the upstream remote: ```bash git remote add upstream https://github.com/modelcontextprotocol/servers.git ``` 4. Create a branch: ```bash git checkout -b my-feature ``` ## Development Guidelines ### Code Style - Follow the existing code style in the repository - Include appropriate type definitions - Add comments for complex logic ### Documentation - Include a detailed README.md in your server directory - Document all configuration options - Provide setup instructions - Include usage examples ### Security - Follow security best practices - Implement proper input validation - Handle errors appropriately - Document security considerations ## Submitting Changes 1. Commit your changes: ```bash git add . git commit -m "Description of changes" ``` 2. Push to your fork: ```bash git push origin my-feature ``` 3. Create a Pull Request through GitHub ### Pull Request Guidelines - Thoroughly test your changes - Fill out the pull request template completely - Link any related issues - Provide clear description of changes - Include any necessary documentation updates - Add screenshots for UI changes - List any breaking changes ## Community - Participate in [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) - Follow the [Code of Conduct](CODE_OF_CONDUCT.md) ## Questions? - Check the [documentation](https://modelcontextprotocol.io) - Ask in GitHub Discussions Thank you for contributing to MCP Servers!
{ "source": "modelcontextprotocol/servers", "title": "CONTRIBUTING.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/CONTRIBUTING.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 3015 }
# Model Context Protocol servers This repository is a collection of *reference implementations* for the [Model Context Protocol](https://modelcontextprotocol.io/) (MCP), as well as references to community built servers and additional resources. The servers in this repository showcase the versatility and extensibility of MCP, demonstrating how it can be used to give Large Language Models (LLMs) secure, controlled access to tools and data sources. Each MCP server is implemented with either the [Typescript MCP SDK](https://github.com/modelcontextprotocol/typescript-sdk) or [Python MCP SDK](https://github.com/modelcontextprotocol/python-sdk). > Note: Lists in this README are maintained in alphabetical order to minimize merge conflicts when adding new items. ## 🌟 Reference Servers These servers aim to demonstrate MCP features and the TypeScript and Python SDKs. - **[AWS KB Retrieval](src/aws-kb-retrieval-server)** - Retrieval from AWS Knowledge Base using Bedrock Agent Runtime - **[Brave Search](src/brave-search)** - Web and local search using Brave's Search API - **[EverArt](src/everart)** - AI image generation using various models - **[Everything](src/everything)** - Reference / test server with prompts, resources, and tools - **[Fetch](src/fetch)** - Web content fetching and conversion for efficient LLM usage - **[Filesystem](src/filesystem)** - Secure file operations with configurable access controls - **[Git](src/git)** - Tools to read, search, and manipulate Git repositories - **[GitHub](src/github)** - Repository management, file operations, and GitHub API integration - **[GitLab](src/gitlab)** - GitLab API, enabling project management - **[Google Drive](src/gdrive)** - File access and search capabilities for Google Drive - **[Google Maps](src/google-maps)** - Location services, directions, and place details - **[Memory](src/memory)** - Knowledge graph-based persistent memory system - **[PostgreSQL](src/postgres)** - Read-only database access with schema inspection - **[Puppeteer](src/puppeteer)** - Browser automation and web scraping - **[Sentry](src/sentry)** - Retrieving and analyzing issues from Sentry.io - **[Sequential Thinking](src/sequentialthinking)** - Dynamic and reflective problem-solving through thought sequences - **[Slack](src/slack)** - Channel management and messaging capabilities - **[Sqlite](src/sqlite)** - Database interaction and business intelligence capabilities - **[Time](src/time)** - Time and timezone conversion capabilities ## 🤝 Third-Party Servers ### 🎖️ Official Integrations Official integrations are maintained by companies building production ready MCP servers for their platforms. - <img height="12" width="12" src="https://www.21st.dev/favicon.ico" alt="21st.dev Logo" /> **[21st.dev Magic](https://github.com/21st-dev/magic-mcp)** - Create crafted UI components inspired by the best 21st.dev design engineers. 
- <img height="12" width="12" src="https://apify.com/favicon.ico" alt="Apify Logo" /> **[Apify](https://github.com/apify/actors-mcp-server)** - [Actors MCP Server](https://apify.com/apify/actors-mcp-server): Use 3,000+ pre-built cloud tools to extract data from websites, e-commerce, social media, search engines, maps, and more - <img height="12" width="12" src="https://axiom.co/favicon.ico" alt="Axiom Logo" /> **[Axiom](https://github.com/axiomhq/mcp-server-axiom)** - Query and analyze your Axiom logs, traces, and all other event data in natural language - <img height="12" width="12" src="https://browserbase.com/favicon.ico" alt="Browserbase Logo" /> **[Browserbase](https://github.com/browserbase/mcp-server-browserbase)** - Automate browser interactions in the cloud (e.g. web navigation, data extraction, form filling, and more) - <img height="12" width="12" src="https://cdn.simpleicons.org/cloudflare" /> **[Cloudflare](https://github.com/cloudflare/mcp-server-cloudflare)** - Deploy, configure & interrogate your resources on the Cloudflare developer platform (e.g. Workers/KV/R2/D1) - <img height="12" width="12" src="https://e2b.dev/favicon.ico" alt="E2B Logo" /> **[E2B](https://github.com/e2b-dev/mcp-server)** - Run code in secure sandboxes hosted by [E2B](https://e2b.dev) - <img height="12" width="12" src="https://esignatures.com/favicon.ico" alt="eSignatures Logo" /> **[eSignatures](https://github.com/esignaturescom/mcp-server-esignatures)** - Contract and template management for drafting, reviewing, and sending binding contracts. - <img height="12" width="12" src="https://exa.ai/images/favicon-32x32.png" alt="Exa Logo" /> **[Exa](https://github.com/exa-labs/exa-mcp-server)** - Search Engine made for AIs by [Exa](https://exa.ai) - <img height="12" width="12" src="https://firecrawl.dev/favicon.ico" alt="Firecrawl Logo" /> **[Firecrawl](https://github.com/mendableai/firecrawl-mcp-server)** - Extract web data with [Firecrawl](https://firecrawl.dev) - <img height="12" width="12" src="https://fireproof.storage/favicon.ico" alt="Fireproof Logo" /> **[Fireproof](https://github.com/fireproof-storage/mcp-database-server)** - Immutable ledger database with live synchronization - <img height="12" width="12" src="https://grafana.com/favicon.ico" alt="Grafana Logo" /> **[Grafana](https://github.com/grafana/mcp-grafana)** - Search dashboards, investigate incidents and query datasources in your Grafana instance - **[IBM wxflows](https://github.com/IBM/wxflows/tree/main/examples/mcp/javascript)** - Tool platform by IBM to build, test and deploy tools for any data source - <img height="12" width="12" src="https://integration.app/favicon.ico" alt="Integration App Icon" /> **[Integration App](https://github.com/integration-app/mcp-server)** - Interact with any other SaaS applications on behalf of your customers. - <img height="12" width="12" src="https://cdn.simpleicons.org/jetbrains" /> **[JetBrains](https://github.com/JetBrains/mcp-jetbrains)** – Work on your code with JetBrains IDEs - <img height="12" width="12" src="https://kagi.com/favicon.ico" alt="Kagi Logo" /> **[Kagi Search](https://github.com/kagisearch/kagimcp)** - Search the web using Kagi's search API - <img height="12" width="12" src="https://lingo.dev/favicon.ico" alt="Lingo.dev Logo" /> **[Lingo.dev](https://github.com/lingodotdev/lingo.dev/blob/main/mcp.md)** - Make your AI agent speak every language on the planet, using [Lingo.dev](https://lingo.dev) Localization Engine. 
- <img height="12" width="12" src="https://www.meilisearch.com/favicon.ico" alt="Meilisearch Logo" /> **[Meilisearch](https://github.com/meilisearch/meilisearch-mcp)** - Interact & query with Meilisearch (Full-text & semantic search API) - <img height="12" width="12" src="https://metoro.io/static/images/logos/Metoro.svg" /> **[Metoro](https://github.com/metoro-io/metoro-mcp-server)** - Query and interact with kubernetes environments monitored by Metoro - <img height="12" width="12" src="https://www.motherduck.com/favicon.ico" alt="MotherDuck Logo" /> **[MotherDuck](https://github.com/motherduckdb/mcp-server-motherduck)** - Query and analyze data with MotherDuck and local DuckDB - <img height="12" width="12" src="https://needle-ai.com/images/needle-logo-orange-2-rounded.png" alt="Needle AI Logo" /> **[Needle](https://github.com/needle-ai/needle-mcp)** - Production-ready RAG out of the box to search and retrieve data from your own documents. - <img height="12" width="12" src="https://neo4j.com/favicon.ico" alt="Neo4j Logo" /> **[Neo4j](https://github.com/neo4j-contrib/mcp-neo4j/)** - Neo4j graph database server (schema + read/write-cypher) and separate graph database backed memory - **[Neon](https://github.com/neondatabase/mcp-server-neon)** - Interact with the Neon serverless Postgres platform - <img height="12" width="12" src="https://oxylabs.io/favicon.ico" alt="Oxylabs Logo" /> **[Oxylabs](https://github.com/oxylabs/oxylabs-mcp)** - Scrape websites with Oxylabs Web API, supporting dynamic rendering and parsing for structured data extraction. - <img height="12" width="12" src="https://qdrant.tech/img/brand-resources-logos/logomark.svg" /> **[Qdrant](https://github.com/qdrant/mcp-server-qdrant/)** - Implement semantic memory layer on top of the Qdrant vector search engine - **[Raygun](https://github.com/MindscapeHQ/mcp-server-raygun)** - Interact with your crash reporting and real using monitoring data on your Raygun account - <img height="12" width="12" src="https://riza.io/favicon.ico" alt="Riza logo" /> **[Riza](https://github.com/riza-io/riza-mcp)** - Arbitrary code execution and tool-use platform for LLMs by [Riza](https://riza.io) - <img height="12" width="12" src="https://pics.fatwang2.com/56912e614b35093426c515860f9f2234.svg" /> [Search1API](https://github.com/fatwang2/search1api-mcp) - One API for Search, Crawling, and Sitemaps - <img height="12" width="12" src="https://stripe.com/favicon.ico" alt="Stripe Logo" /> **[Stripe](https://github.com/stripe/agent-toolkit)** - Interact with Stripe API - <img height="12" width="12" src="https://tavily.com/favicon.ico" alt="Tavily Logo" /> **[Tavily](https://github.com/tavily-ai/tavily-mcp)** - Search engine for AI agents (search + extract) powered by [Tavily](https://tavily.com/) - <img height="12" width="12" src="https://www.tinybird.co/favicon.ico" alt="Tinybird Logo" /> **[Tinybird](https://github.com/tinybirdco/mcp-tinybird)** - Interact with Tinybird serverless ClickHouse platform - <img height="12" width="12" src="https://verodat.io/assets/favicon-16x16.png" alt="Verodat Logo" /> **[Verodat](https://github.com/Verodat/verodat-mcp-server)** - Interact with Verodat AI Ready Data platform ### 🌎 Community Servers A growing set of community-developed and maintained servers demonstrates various applications of MCP across different domains. > **Note:** Community servers are **untested** and should be used at **your own risk**. They are not affiliated with or endorsed by Anthropic. 
- **[AWS S3](https://github.com/aws-samples/sample-mcp-server-s3)** - A sample MCP server for AWS S3 that flexibly fetches objects from S3 such as PDF documents - **[AWS](https://github.com/rishikavikondala/mcp-server-aws)** - Perform operations on your AWS resources using an LLM - **[Airtable](https://github.com/domdomegg/airtable-mcp-server)** - Read and write access to [Airtable](https://airtable.com/) databases, with schema inspection. - **[Airtable](https://github.com/felores/airtable-mcp)** - Airtable Model Context Protocol Server. - **[AlphaVantage](https://github.com/calvernaz/alphavantage)** - MCP server for stock market data API [AlphaVantage](https://www.alphavantage.co) - **[Anki](https://github.com/scorzeth/anki-mcp-server)** - An MCP server for interacting with your [Anki](https://apps.ankiweb.net) decks and cards. - **[Any Chat Completions](https://github.com/pyroprompts/any-chat-completions-mcp)** - Interact with any OpenAI SDK Compatible Chat Completions API like OpenAI, Perplexity, Groq, xAI and many more. - **[ArangoDB](https://github.com/ravenwits/mcp-server-arangodb)** - MCP Server that provides database interaction capabilities through [ArangoDB](https://arangodb.com/). - **[Atlassian](https://github.com/sooperset/mcp-atlassian)** - Interact with Atlassian Cloud products (Confluence and Jira) including searching/reading Confluence spaces/pages, accessing Jira issues, and project metadata. - **[Base Free USDC Transfer](https://github.com/magnetai/mcp-free-usdc-transfer)** - Send USDC on [Base](https://base.org) for free using Claude AI! Built with [Coinbase CDP](https://docs.cdp.coinbase.com/mpc-wallet/docs/welcome). - **[BigQuery](https://github.com/LucasHild/mcp-server-bigquery)** (by LucasHild) - This server enables LLMs to inspect database schemas and execute queries on BigQuery. - **[BigQuery](https://github.com/ergut/mcp-bigquery-server)** (by ergut) - Server implementation for Google BigQuery integration that enables direct BigQuery database access and querying capabilities - **[Calendar](https://github.com/GongRzhe/Calendar-MCP-Server)** - Google Calendar integration server enabling AI assistants to manage calendar events through natural language interactions. - **[CFBD API](https://github.com/lenwood/cfbd-mcp-server)** - An MCP server for the [College Football Data API](https://collegefootballdata.com/). - **[ChatMCP](https://github.com/AI-QL/chat-mcp)** – An Open Source Cross-platform GUI Desktop application compatible with Linux, macOS, and Windows, enabling seamless interaction with MCP servers across dynamically selectable LLMs, by **[AIQL](https://github.com/AI-QL)** - **[ChatSum](https://github.com/mcpso/mcp-server-chatsum)** - Query and Summarize chat messages with LLM. by [mcpso](https://mcp.so) - **[Chroma](https://github.com/privetin/chroma)** - Vector database server for semantic document search and metadata filtering, built on Chroma - **[ClaudePost](https://github.com/ZilongXue/claude-post)** - ClaudePost enables seamless email management for Gmail, offering secure features like email search, reading, and sending. - **[Cloudinary](https://github.com/felores/cloudinary-mcp-server)** - Cloudinary Model Context Protocol Server to upload media to Cloudinary and get back the media link and details. - **[code-executor](https://github.com/bazinga012/mcp_code_executor)** - An MCP server that allows LLMs to execute Python code within a specified Conda environment. 
- **[code-sandbox-mcp](https://github.com/Automata-Labs-team/code-sandbox-mcp)** - An MCP server to create secure code sandbox environment for executing code within Docker containers. - **[cognee-mcp](https://github.com/topoteretes/cognee/tree/main/cognee-mcp)** - GraphRAG memory server with customizable ingestion, data processing and search - **[coin_api_mcp](https://github.com/longmans/coin_api_mcp)** - Provides access to [coinmarketcap](https://coinmarketcap.com/) cryptocurrency data. - **[Contentful-mcp](https://github.com/ivo-toby/contentful-mcp)** - Read, update, delete, publish content in your [Contentful](https://contentful.com) space(s) from this MCP Server. - **[Data Exploration](https://github.com/reading-plus-ai/mcp-server-data-exploration)** - MCP server for autonomous data exploration on .csv-based datasets, providing intelligent insights with minimal effort. NOTE: Will execute arbitrary Python code on your machine, please use with caution! - **[Dataset Viewer](https://github.com/privetin/dataset-viewer)** - Browse and analyze Hugging Face datasets with features like search, filtering, statistics, and data export - **[DeepSeek MCP Server](https://github.com/DMontgomery40/deepseek-mcp-server)** - Model Context Protocol server integrating DeepSeek's advanced language models, in addition to [other useful API endpoints](https://github.com/DMontgomery40/deepseek-mcp-server?tab=readme-ov-file#features) - **[Deepseek_R1](https://github.com/66julienmartin/MCP-server-Deepseek_R1)** - A Model Context Protocol (MCP) server implementation connecting Claude Desktop with DeepSeek's language models (R1/V3) - **[Descope](https://github.com/descope-sample-apps/descope-mcp-server)** - An MCP server to integrate with [Descope](https://descope.com) to search audit logs, manage users, and more. - **[DevRev](https://github.com/kpsunil97/devrev-mcp-server)** - An MCP server to integrate with DevRev APIs to search through your DevRev Knowledge Graph where objects can be imported from diff. sources listed [here](https://devrev.ai/docs/import#available-sources). - **[Dify](https://github.com/YanxingLiu/dify-mcp-server)** - A simple implementation of an MCP server for dify workflows. - **[Discord](https://github.com/v-3/discordmcp)** - A MCP server to connect to Discord guilds through a bot and read and write messages in channels - **[Docker](https://github.com/ckreiling/mcp-server-docker)** - Integrate with Docker to manage containers, images, volumes, and networks. - **[Drupal](https://github.com/Omedia/mcp-server-drupal)** - Server for interacting with [Drupal](https://www.drupal.org/project/mcp) using STDIO transport layer. - **[Elasticsearch](https://github.com/cr7258/elasticsearch-mcp-server)** - MCP server implementation that provides Elasticsearch interaction. - **[ElevenLabs](https://github.com/mamertofabian/elevenlabs-mcp-server)** - A server that integrates with ElevenLabs text-to-speech API capable of generating full voiceovers with multiple voices. - **[Eunomia](https://github.com/whataboutyou-ai/eunomia-MCP-server)** - Extension of the Eunomia framework that connects Eunomia instruments with MCP servers - **[Everything Search](https://github.com/mamertofabian/mcp-everything-search)** - Fast file searching capabilities across Windows (using [Everything SDK](https://www.voidtools.com/support/everything/sdk/)), macOS (using mdfind command), and Linux (using locate/plocate command). 
- **[Fetch](https://github.com/zcaceres/fetch-mcp)** - A server that flexibly fetches HTML, JSON, Markdown, or plaintext. - **[FireCrawl](https://github.com/vrknetha/mcp-server-firecrawl)** - Advanced web scraping with JavaScript rendering, PDF support, and smart rate limiting - **[FlightRadar24](https://github.com/sunsetcoder/flightradar24-mcp-server)** - A Claude Desktop MCP server that helps you track flights in real-time using Flightradar24 data. - **[Glean](https://github.com/longyi1207/glean-mcp-server)** - A server that uses Glean API to search and chat. - **[Gmail](https://github.com/GongRzhe/Gmail-MCP-Server)** - A Model Context Protocol (MCP) server for Gmail integration in Claude Desktop with auto authentication support. - **[Goal Story](https://github.com/hichana/goalstory-mcp)** - a Goal Tracker and Visualization Tool for personal and professional development. - **[Golang Filesystem Server](https://github.com/mark3labs/mcp-filesystem-server)** - Secure file operations with configurable access controls built with Go! - **[Google Calendar](https://github.com/v-3/google-calendar)** - Integration with Google Calendar to check schedules, find time, and add/delete events - **[Google Calendar](https://github.com/nspady/google-calendar-mcp)** - Google Calendar MCP Server for managing Google calendar events. Also supports searching for events by attributes like title and location. - **[Google Custom Search](https://github.com/adenot/mcp-google-search)** - Provides Google Search results via the Google Custom Search API - **[Google Tasks](https://github.com/zcaceres/gtasks-mcp)** - Google Tasks API Model Context Protocol Server. - **[Holaspirit](https://github.com/syucream/holaspirit-mcp-server)** - Interact with [Holaspirit](https://www.holaspirit.com/). - **[Home Assistant](https://github.com/tevonsb/homeassistant-mcp)** - Interact with [Home Assistant](https://www.home-assistant.io/) including viewing and controlling lights, switches, sensors, and all other Home Assistant entities. - **[HubSpot](https://github.com/buryhuang/mcp-hubspot)** - HubSpot CRM integration for managing contacts and companies. Create and retrieve CRM data directly through Claude chat. - **[HuggingFace Spaces](https://github.com/evalstate/mcp-hfspace)** - Server for using HuggingFace Spaces, supporting Open Source Image, Audio, Text Models and more. Claude Desktop mode for easy integration. - **[Inoyu](https://github.com/sergehuber/inoyu-mcp-unomi-server)** - Interact with an Apache Unomi CDP customer data platform to retrieve and update customer profiles - **[iTerm MCP](https://github.com/ferrislucas/iterm-mcp)** - Integration with iTerm2 terminal emulator for macOS, enabling LLMs to execute and monitor terminal commands. - **[JavaFX](https://github.com/mcpso/mcp-server-javafx)** - Make drawings using a JavaFX canvas - **[JDBC](https://github.com/quarkiverse/quarkus-mcp-servers/tree/main/jdbc)** - Connect to any JDBC-compatible database and query, insert, update, delete, and more. Supports MySQL, PostgreSQL, Oracle, SQL Server, sqllite and [more](https://github.com/quarkiverse/quarkus-mcp-servers/tree/main/jdbc#supported-jdbc-variants). - **[JSON](https://github.com/GongRzhe/JSON-MCP-Server)** - JSON handling and processing server with advanced query capabilities using JSONPath syntax and support for array, string, numeric, and date operations. 
- **[Keycloak MCP](https://github.com/ChristophEnglisch/keycloak-model-context-protocol)** - This MCP server enables natural language interaction with Keycloak for user and realm management including creating, deleting, and listing users and realms. - **[Kibela](https://github.com/kiwamizamurai/mcp-kibela-server)** (by kiwamizamurai) - Interact with Kibela API. - **[kintone](https://github.com/macrat/mcp-server-kintone)** - Manage records and apps in [kintone](https://kintone.com) through LLM tools. - **[Kubernetes](https://github.com/Flux159/mcp-server-kubernetes)** - Connect to Kubernetes cluster and manage pods, deployments, and services. - **[Lightdash](https://github.com/syucream/lightdash-mcp-server)** - Interact with [Lightdash](https://www.lightdash.com/), a BI tool. - **[Linear](https://github.com/jerhadf/linear-mcp-server)** - Allows LLM to interact with Linear's API for project management, including searching, creating, and updating issues. - **[LlamaCloud](https://github.com/run-llama/mcp-server-llamacloud)** (by marcusschiesser) - Integrate the data stored in a managed index on [LlamaCloud](https://cloud.llamaindex.ai/) - **[llm-context](https://github.com/cyberchitta/llm-context.py)** - Provides a repo-packing MCP tool with configurable profiles that specify file inclusion/exclusion patterns and optional prompts. - **[MCP Compass](https://github.com/liuyoshio/mcp-compass)** - Suggest the right MCP server for your needs - **[MCP Installer](https://github.com/anaisbetts/mcp-installer)** - This server is a server that installs other MCP servers for you. - **[mcp-k8s-go](https://github.com/strowk/mcp-k8s-go)** - Golang-based Kubernetes server for MCP to browse pods and their logs, events, namespaces and more. Built to be extensible. - **[mcp-proxy](https://github.com/sparfenyuk/mcp-proxy)** - Connect to MCP servers that run on SSE transport, or expose stdio servers as an SSE server. - **[MSSQL](https://github.com/aekanun2020/mcp-server/)** - MSSQL database integration with configurable access controls and schema inspection - **[MSSQL](https://github.com/JexinSam/mssql_mcp_server)** (by jexin) - MCP Server for MSSQL database in Python - **[MSSQL-Python](https://github.com/amornpan/py-mcp-mssql)** (by amornpan) - A read-only Python implementation for MSSQL database access with enhanced security features, configurable access controls, and schema inspection capabilities. Focuses on safe database interaction through Python ecosystem. - **[Markdownify](https://github.com/zcaceres/mcp-markdownify-server)** - MCP to convert almost anything to Markdown (PPTX, HTML, PDF, Youtube Transcripts and more) - **[Minima](https://github.com/dmayboroda/minima)** - MCP server for RAG on local files - **[MongoDB](https://github.com/kiliczsh/mcp-mongo-server)** - A Model Context Protocol Server for MongoDB. - **[Monday.com](https://github.com/sakce/mcp-server-monday)** - MCP Server to interact with Monday.com boards and items. - **[MySQL](https://github.com/benborla/mcp-server-mysql)** (by benborla) - MySQL database integration in NodeJS with configurable access controls and schema inspection - **[MySQL](https://github.com/designcomputer/mysql_mcp_server)** (by DesignComputer) - MySQL database integration in Python with configurable access controls and schema inspection - **[NS Travel Information](https://github.com/r-huijts/ns-mcp-server)** - Access Dutch Railways (NS) real-time train travel information and disruptions through the official NS API. 
- **[Neo4j](https://github.com/da-okazaki/mcp-neo4j-server)** - A community built server that interacts with Neo4j Graph Database. - **[Neovim](https://github.com/bigcodegen/mcp-neovim-server)** - An MCP Server for your Neovim session. - **[Notion](https://github.com/suekou/mcp-notion-server)** (by suekou) - Interact with Notion API. - **[Notion](https://github.com/v-3/notion-server)** (by v-3) - Notion MCP integration. Search, Read, Update, and Create pages through Claude chat. - **[oatpp-mcp](https://github.com/oatpp/oatpp-mcp)** - C++ MCP integration for Oat++. Use [Oat++](https://oatpp.io) to build MCP servers. - **[Obsidian Markdown Notes](https://github.com/calclavia/mcp-obsidian)** - Read and search through your Obsidian vault or any directory containing Markdown notes - **[obsidian-mcp](https://github.com/StevenStavrakis/obsidian-mcp)** - (by Steven Stavrakis) An MCP server for Obsidian.md with tools for searching, reading, writing, and organizing notes. - **[OpenAPI](https://github.com/snaggle-ai/openapi-mcp-server)** - Interact with [OpenAPI](https://www.openapis.org/) APIs. - **[OpenCTI](https://github.com/Spathodea-Network/opencti-mcp)** - Interact with OpenCTI platform to retrieve threat intelligence data including reports, indicators, malware and threat actors. - **[OpenRPC](https://github.com/shanejonas/openrpc-mpc-server)** - Interact with and discover JSON-RPC APIs via [OpenRPC](https://open-rpc.org). - **[Open Strategy Partners Marketing Tools](https://github.com/open-strategy-partners/osp_marketing_tools)** - Content editing codes, value map, and positioning tools for product marketing. - **[Pandoc](https://github.com/vivekVells/mcp-pandoc)** - MCP server for seamless document format conversion using Pandoc, supporting Markdown, HTML, PDF, DOCX (.docx), csv and more. - **[PIF](https://github.com/hungryrobot1/MCP-PIF)** - A Personal Intelligence Framework (PIF), providing tools for file operations, structured reasoning, and journal-based documentation to support continuity and evolving human-AI collaboration across sessions. - **[Pinecone](https://github.com/sirmews/mcp-pinecone)** - MCP server for searching and uploading records to Pinecone. Allows for simple RAG features, leveraging Pinecone's Inference API. - **[Placid.app](https://github.com/felores/placid-mcp-server)** - Generate image and video creatives using Placid.app templates - **[Playwright](https://github.com/executeautomation/mcp-playwright)** - This MCP Server will help you run browser automation and webscraping using Playwright - **[Postman](https://github.com/shannonlal/mcp-postman)** - MCP server for running Postman Collections locally via Newman. Allows for simple execution of Postman Server and returns the results of whether the collection passed all the tests. - **[Qwen_Max](https://github.com/66julienmartin/MCP-server-Qwen_Max)** - A Model Context Protocol (MCP) server implementation for the Qwen models. - **[RabbitMQ](https://github.com/kenliao94/mcp-server-rabbitmq)** - The MCP server that interacts with RabbitMQ to publish and consume messages. - **[RAG Web Browser](https://github.com/apify/mcp-server-rag-web-browser)** An MCP server for Apify's open-source RAG Web Browser [Actor](https://apify.com/apify/rag-web-browser) to perform web searches, scrape URLs, and return content in Markdown. - **[Reaper](https://github.com/dschuler36/reaper-mcp-server)** - Interact with your [Reaper](https://www.reaper.fm/) (Digital Audio Workstation) projects. 
- **[Redis](https://github.com/GongRzhe/REDIS-MCP-Server)** - Redis database operations and caching microservice server with support for key-value operations, expiration management, and pattern-based key listing. - **[Redis](https://github.com/prajwalnayak7/mcp-server-redis)** MCP server to interact with Redis Server, AWS Memory DB, etc for caching or other use-cases where in-memory and key-value based storage is appropriate - **[Rememberizer AI](https://github.com/skydeckai/mcp-server-rememberizer)** - An MCP server designed for interacting with the Rememberizer data source, facilitating enhanced knowledge retrieval. - **[Replicate](https://github.com/deepfates/mcp-replicate)** - Search, run and manage machine learning models on Replicate through a simple tool-based interface. Browse models, create predictions, track their status, and handle generated images. - **[Rijksmuseum](https://github.com/r-huijts/rijksmuseum-mcp)** - Interface with the Rijksmuseum API to search artworks, retrieve artwork details, access image tiles, and explore user collections. - **[Salesforce MCP](https://github.com/smn2gnt/MCP-Salesforce)** - Interact with Salesforce Data and Metadata - **[Scholarly](https://github.com/adityak74/mcp-scholarly)** - A MCP server to search for scholarly and academic articles. - **[SearXNG](https://github.com/ihor-sokoliuk/mcp-searxng)** - A Model Context Protocol Server for [SearXNG](https://docs.searxng.org) - **[Snowflake](https://github.com/isaacwasserman/mcp-snowflake-server)** - This MCP server enables LLMs to interact with Snowflake databases, allowing for secure and controlled data operations. - **[Spotify](https://github.com/varunneal/spotify-mcp)** - This MCP allows an LLM to play and use Spotify. - **[Stripe](https://github.com/atharvagupta2003/mcp-stripe)** - This MCP allows integration with Stripe for handling payments, customers, and refunds. - **[TMDB](https://github.com/Laksh-star/mcp-server-tmdb)** - This MCP server integrates with The Movie Database (TMDB) API to provide movie information, search capabilities, and recommendations. - **[Tavily search](https://github.com/RamXX/mcp-tavily)** - An MCP server for Tavily's search & news API, with explicit site inclusions/exclusions - **[Ticketmaster](https://github.com/delorenj/mcp-server-ticketmaster)** - Search for events, venues, and attractions through the Ticketmaster Discovery API - **[Todoist](https://github.com/abhiz123/todoist-mcp-server)** - Interact with Todoist to manage your tasks. - **[Travel Planner](https://github.com/GongRzhe/TRAVEL-PLANNER-MCP-Server)** - Travel planning and itinerary management server integrating with Google Maps API for location search, place details, and route calculations. - **[Vega-Lite](https://github.com/isaacwasserman/mcp-vegalite-server)** - Generate visualizations from fetched data using the VegaLite format and renderer. - **[Video Editor](https://github.com/burningion/video-editing-mcp)** - A Model Context Protocol Server to add, edit, and search videos with [Video Jungle](https://www.video-jungle.com/). - **[WildFly MCP](https://github.com/wildfly-extras/wildfly-mcp)** - WildFly MCP server that enables LLM to interact with running WildFly servers (retrieve metrics, logs, invoke operations, ...). - **[Windows CLI](https://github.com/SimonB97/win-cli-mcp-server)** - MCP server for secure command-line interactions on Windows systems, enabling controlled access to PowerShell, CMD, and Git Bash shells. 
- **[World Bank data API](https://github.com/anshumax/world_bank_mcp_server)** - A server that fetches data indicators available with the World Bank as part of their data API - **[X (Twitter)](https://github.com/EnesCinr/twitter-mcp)** (by EnesCinr) - Interact with twitter API. Post tweets and search for tweets by query. - **[X (Twitter)](https://github.com/vidhupv/x-mcp)** (by vidhupv) - Create, manage and publish X/Twitter posts directly through Claude chat. - **[XMind](https://github.com/apeyroux/mcp-xmind)** - Read and search through your XMind directory containing XMind files. - **[YouTube](https://github.com/ZubeidHendricks/youtube-mcp-server)** - Comprehensive YouTube API integration for video management, Shorts creation, and analytics. ## 📚 Frameworks These are high-level frameworks that make it easier to build MCP servers or clients. ### For servers * **[EasyMCP](https://github.com/zcaceres/easy-mcp/)** (TypeScript) * **[FastMCP](https://github.com/punkpeye/fastmcp)** (TypeScript) * **[Foxy Contexts](https://github.com/strowk/foxy-contexts)** – A library to build MCP servers in Golang by **[strowk](https://github.com/strowk)** * **[Quarkus MCP Server SDK](https://github.com/quarkiverse/quarkus-mcp-server)** (Java) ### For clients * **[codemirror-mcp](https://github.com/marimo-team/codemirror-mcp)** - CodeMirror extension that implements the Model Context Protocol (MCP) for resource mentions and prompt commands ## 📚 Resources Additional resources on MCP. - **[AiMCP](https://www.aimcp.info)** - A collection of MCP clients&servers to find the right mcp tools by **[Hekmon](https://github.com/hekmon8)** - **[Awesome Crypto MCP Servers by badkk](https://github.com/badkk/awesome-crypto-mcp-servers)** - A curated list of MCP servers by **[Luke Fan](https://github.com/badkk)** - **[Awesome MCP Servers by appcypher](https://github.com/appcypher/awesome-mcp-servers)** - A curated list of MCP servers by **[Stephen Akinyemi](https://github.com/appcypher)** - **[Awesome MCP Servers by punkpeye](https://github.com/punkpeye/awesome-mcp-servers)** (**[website](https://glama.ai/mcp/servers)**) - A curated list of MCP servers by **[Frank Fiegel](https://github.com/punkpeye)** - **[Awesome MCP Servers by wong2](https://github.com/wong2/awesome-mcp-servers)** (**[website](https://mcpservers.org)**) - A curated list of MCP servers by **[wong2](https://github.com/wong2)** - **[Discord Server](https://glama.ai/mcp/discord)** – A community discord server dedicated to MCP by **[Frank Fiegel](https://github.com/punkpeye)** - **[MCP Badges](https://github.com/mcpx-dev/mcp-badges)** – Quickly highlight your MCP project with clear, eye-catching badges, by **[Ironben](https://github.com/nanbingxyz)** - **[MCP Servers Hub](https://github.com/apappascs/mcp-servers-hub)** (**[website](https://mcp-servers-hub-website.pages.dev/)**) - A curated list of MCP servers by **[apappascs](https://github.com/apappascs)** - **[MCP X Community](https://x.com/i/communities/1861891349609603310)** – A X community for MCP by **[Xiaoyi](https://x.com/chxy)** - **[mcp-cli](https://github.com/wong2/mcp-cli)** - A CLI inspector for the Model Context Protocol by **[wong2](https://github.com/wong2)** - **[mcp-get](https://mcp-get.com)** - Command line tool for installing and managing MCP servers by **[Michael Latman](https://github.com/michaellatman)** - **[mcp-manager](https://github.com/zueai/mcp-manager)** - Simple Web UI to install and manage MCP servers for Claude Desktop by **[Zue](https://github.com/zueai)** - 
**[MCPHub](https://github.com/Jeamee/MCPHub-Desktop)** – An Open Source MacOS & Windows GUI Desktop app for discovering, installing and managing MCP servers by **[Jeamee](https://github.com/jeamee)** - **[mcp.run](https://mcp.run)** - A hosted registry and control plane to install & run secure + portable MCP Servers. - **[Open-Sourced MCP Servers Directory](https://github.com/chatmcp/mcp-directory)** - A curated list of MCP servers by **[mcpso](https://mcp.so)** - <img height="12" width="12" src="https://opentools.com/favicon.ico" alt="OpenTools Logo" /> **[OpenTools](https://opentools.com)** - An open registry for finding, installing, and building with MCP servers by **[opentoolsteam](https://github.com/opentoolsteam)** - **[PulseMCP](https://www.pulsemcp.com)** ([API](https://www.pulsemcp.com/api)) - Community hub & weekly newsletter for discovering MCP servers, clients, articles, and news by **[Tadas Antanavicius](https://github.com/tadasant)**, **[Mike Coughlin](https://github.com/macoughl)**, and **[Ravina Patel](https://github.com/ravinahp)** - **[r/mcp](https://www.reddit.com/r/mcp)** – A Reddit community dedicated to MCP by **[Frank Fiegel](https://github.com/punkpeye)** - **[Smithery](https://smithery.ai/)** - A registry of MCP servers to find the right tools for your LLM agents by **[Henry Mao](https://github.com/calclavia)** - **[Toolbase](https://gettoolbase.ai)** - Desktop application that manages tools and MCP servers with just a few clicks - no coding required by **[gching](https://github.com/gching)** ## 🚀 Getting Started ### Using MCP Servers in this Repository Typescript-based servers in this repository can be used directly with `npx`. For example, this will start the [Memory](src/memory) server: ```sh npx -y @modelcontextprotocol/server-memory ``` Python-based servers in this repository can be used directly with [`uvx`](https://docs.astral.sh/uv/concepts/tools/) or [`pip`](https://pypi.org/project/pip/). `uvx` is recommended for ease of use and setup. For example, this will start the [Git](src/git) server: ```sh # With uvx uvx mcp-server-git # With pip pip install mcp-server-git python -m mcp_server_git ``` Follow [these](https://docs.astral.sh/uv/getting-started/installation/) instructions to install `uv` / `uvx` and [these](https://pip.pypa.io/en/stable/installation/) to install `pip`. ### Using an MCP Client However, running a server on its own isn't very useful, and should instead be configured into an MCP client. For example, here's the Claude Desktop configuration to use the above server: ```json { "mcpServers": { "memory": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-memory"] } } } ``` Additional examples of using the Claude Desktop as an MCP client might look like: ```json { "mcpServers": { "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/files"] }, "git": { "command": "uvx", "args": ["mcp-server-git", "--repository", "path/to/git/repo"] }, "github": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-github"], "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>" } }, "postgres": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"] } } } ``` ## 🛠️ Creating Your Own Server Interested in creating your own MCP server? Visit the official documentation at [modelcontextprotocol.io](https://modelcontextprotocol.io/introduction) for comprehensive guides, best practices, and technical details on implementing MCP servers. 
## 🤝 Contributing See [CONTRIBUTING.md](CONTRIBUTING.md) for information about contributing to this repository. ## 🔒 Security See [SECURITY.md](SECURITY.md) for reporting security vulnerabilities. ## 📜 License This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. ## 💬 Community - [GitHub Discussions](https://github.com/orgs/modelcontextprotocol/discussions) ## ⭐ Support If you find MCP servers useful, please consider starring the repository and contributing new servers or improvements! --- Managed by Anthropic, but built together with the community. The Model Context Protocol is open source and we encourage everyone to contribute their own servers and improvements!
{ "source": "modelcontextprotocol/servers", "title": "README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 38612 }
# Security Policy Thank you for helping us keep our MCP servers secure. These servers are maintained by [Anthropic](https://www.anthropic.com/) as part of the Model Context Protocol project. The security of our systems and user data is Anthropic’s top priority. We appreciate the work of security researchers acting in good faith in identifying and reporting potential vulnerabilities. ## Vulnerability Disclosure Program Our Vulnerability Program guidelines are defined on our [HackerOne program page](https://hackerone.com/anthropic-vdp). We ask that any validated vulnerability in this functionality be reported through the [submission form](https://hackerone.com/anthropic-vdp/reports/new?type=team&report_type=vulnerability).
{ "source": "modelcontextprotocol/servers", "title": "SECURITY.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/SECURITY.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 735 }
<!-- Provide a brief description of your changes -->

## Description

## Server Details
<!-- If modifying an existing server, provide details -->
- Server: <!-- e.g., filesystem, github -->
- Changes to: <!-- e.g., tools, resources, prompts -->

## Motivation and Context
<!-- Why is this change needed? What problem does it solve? -->

## How Has This Been Tested?
<!-- Have you tested this with an LLM client? Which scenarios were tested? -->

## Breaking Changes
<!-- Will users need to update their MCP client configurations? -->

## Types of changes
<!-- What types of changes does your code introduce? Put an `x` in all the boxes that apply: -->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation update

## Checklist
<!-- Go over all the following points, and put an `x` in all the boxes that apply. -->
- [ ] I have read the [MCP Protocol Documentation](https://modelcontextprotocol.io)
- [ ] My changes follow MCP security best practices
- [ ] I have updated the server's README accordingly
- [ ] I have tested this with an LLM client
- [ ] My code follows the repository's style guidelines
- [ ] New and existing tests pass locally
- [ ] I have added appropriate error handling
- [ ] I have documented all environment variables and configuration options

## Additional context
<!-- Add any other context, implementation notes, or design decisions -->
{ "source": "modelcontextprotocol/servers", "title": ".github/pull_request_template.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/.github/pull_request_template.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 1541 }
# AWS Knowledge Base Retrieval MCP Server An MCP server implementation for retrieving information from the AWS Knowledge Base using the Bedrock Agent Runtime. ## Features - **RAG (Retrieval-Augmented Generation)**: Retrieve context from the AWS Knowledge Base based on a query and a Knowledge Base ID. - **Supports multiple results retrieval**: Option to retrieve a customizable number of results. ## Tools - **retrieve_from_aws_kb** - Perform retrieval operations using the AWS Knowledge Base. - Inputs: - `query` (string): The search query for retrieval. - `knowledgeBaseId` (string): The ID of the AWS Knowledge Base. - `n` (number, optional): Number of results to retrieve (default: 3). ## Configuration ### Setting up AWS Credentials 1. Obtain AWS access key ID, secret access key, and region from the AWS Management Console. 2. Ensure these credentials have appropriate permissions for Bedrock Agent Runtime operations. ### Usage with Claude Desktop Add this to your `claude_desktop_config.json`: #### Docker ```json { "mcpServers": { "aws-kb-retrieval": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "AWS_ACCESS_KEY_ID", "-e", "AWS_SECRET_ACCESS_KEY", "-e", "AWS_REGION", "mcp/aws-kb-retrieval-server" ], "env": { "AWS_ACCESS_KEY_ID": "YOUR_ACCESS_KEY_HERE", "AWS_SECRET_ACCESS_KEY": "YOUR_SECRET_ACCESS_KEY_HERE", "AWS_REGION": "YOUR_AWS_REGION_HERE" } } } } ``` ```json { "mcpServers": { "aws-kb-retrieval": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-aws-kb-retrieval" ], "env": { "AWS_ACCESS_KEY_ID": "YOUR_ACCESS_KEY_HERE", "AWS_SECRET_ACCESS_KEY": "YOUR_SECRET_ACCESS_KEY_HERE", "AWS_REGION": "YOUR_AWS_REGION_HERE" } } } } ``` ## Building Docker: ```sh docker build -t mcp/aws-kb-retrieval -f src/aws-kb-retrieval-server/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository. This README assumes that your server package is named `@modelcontextprotocol/server-aws-kb-retrieval`. Adjust the package name and installation details if they differ in your setup. Also, ensure that your server script is correctly built and that all dependencies are properly managed in your `package.json`.
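For a quick smoke test outside Claude Desktop, a minimal sketch using the Python MCP SDK's stdio client is shown below (assumed usage; the query and knowledge base ID are placeholders, and the launch command mirrors the NPX configuration above):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server over stdio, the same way the Claude Desktop config above does.
    params = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-aws-kb-retrieval"],
        env={
            "AWS_ACCESS_KEY_ID": "YOUR_ACCESS_KEY_HERE",
            "AWS_SECRET_ACCESS_KEY": "YOUR_SECRET_ACCESS_KEY_HERE",
            "AWS_REGION": "YOUR_AWS_REGION_HERE",
        },
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the retrieve_from_aws_kb tool with the inputs documented above.
            result = await session.call_tool(
                "retrieve_from_aws_kb",
                arguments={"query": "example query", "knowledgeBaseId": "KB_ID_HERE", "n": 3},
            )
            print(result)

asyncio.run(main())
```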
{ "source": "modelcontextprotocol/servers", "title": "src/aws-kb-retrieval-server/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/aws-kb-retrieval-server/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 2535 }
# Brave Search MCP Server An MCP server implementation that integrates the Brave Search API, providing both web and local search capabilities. ## Features - **Web Search**: General queries, news, articles, with pagination and freshness controls - **Local Search**: Find businesses, restaurants, and services with detailed information - **Flexible Filtering**: Control result types, safety levels, and content freshness - **Smart Fallbacks**: Local search automatically falls back to web when no results are found ## Tools - **brave_web_search** - Execute web searches with pagination and filtering - Inputs: - `query` (string): Search terms - `count` (number, optional): Results per page (max 20) - `offset` (number, optional): Pagination offset (max 9) - **brave_local_search** - Search for local businesses and services - Inputs: - `query` (string): Local search terms - `count` (number, optional): Number of results (max 20) - Automatically falls back to web search if no local results found ## Configuration ### Getting an API Key 1. Sign up for a [Brave Search API account](https://brave.com/search/api/) 2. Choose a plan (Free tier available with 2,000 queries/month) 3. Generate your API key [from the developer dashboard](https://api.search.brave.com/app/keys) ### Usage with Claude Desktop Add this to your `claude_desktop_config.json`: ### Docker ```json { "mcpServers": { "brave-search": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "BRAVE_API_KEY", "mcp/brave-search" ], "env": { "BRAVE_API_KEY": "YOUR_API_KEY_HERE" } } } } ``` ### NPX ```json { "mcpServers": { "brave-search": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-brave-search" ], "env": { "BRAVE_API_KEY": "YOUR_API_KEY_HERE" } } } } ``` ## Build Docker build: ```bash docker build -t mcp/brave-search:latest -f src/brave-search/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
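As a rough, non-normative sketch, the `brave_web_search` tool can be exercised from the MCP TypeScript SDK as shown below; the package name and API key placeholder follow the NPX configuration above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server over stdio with the documented environment variable.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-brave-search"],
  env: { BRAVE_API_KEY: "YOUR_API_KEY_HERE" },
}));

// brave_web_search takes a query plus optional count/offset, as documented above.
const result = await client.callTool({
  name: "brave_web_search",
  arguments: { query: "Model Context Protocol", count: 5 },
});
console.log(result.content);
```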
{ "source": "modelcontextprotocol/servers", "title": "src/brave-search/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/brave-search/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 2328 }
# EverArt MCP Server Image generation server for Claude Desktop using EverArt's API. ## Install ```bash npm install export EVERART_API_KEY=your_key_here ``` ## Config Add to Claude Desktop config: ### Docker ```json { "mcpServers": { "everart": { "command": "docker", "args": ["run", "-i", "--rm", "-e", "EVERART_API_KEY", "mcp/everart"], "env": { "EVERART_API_KEY": "your_key_here" } } } } ``` ### NPX ```json { "mcpServers": { "everart": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-everart"], "env": { "EVERART_API_KEY": "your_key_here" } } } } ``` ## Tools ### generate_image Generates images with multiple model options. Opens result in browser and returns URL. Parameters: ```typescript { prompt: string, // Image description model?: string, // Model ID (default: "207910310772879360") image_count?: number // Number of images (default: 1) } ``` Models: - 5000: FLUX1.1 (standard) - 9000: FLUX1.1-ultra - 6000: SD3.5 - 7000: Recraft-Real - 8000: Recraft-Vector All images generated at 1024x1024. Sample usage: ```javascript const result = await client.callTool({ name: "generate_image", arguments: { prompt: "A cat sitting elegantly", model: "7000", image_count: 1 } }); ``` Response format: ``` Image generated successfully! The image has been opened in your default browser. Generation details: - Model: 7000 - Prompt: "A cat sitting elegantly" - Image URL: https://storage.googleapis.com/... You can also click the URL above to view the image again. ``` ## Building w/ Docker ```sh docker build -t mcp/everart -f src/everart/Dockerfile . ```
{ "source": "modelcontextprotocol/servers", "title": "src/everart/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/everart/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 1713 }
# Everything MCP Server This MCP server attempts to exercise all the features of the MCP protocol. It is not intended to be a useful server, but rather a test server for builders of MCP clients. It implements prompts, tools, resources, sampling, and more to showcase MCP capabilities. ## Components ### Tools 1. `echo` - Simple tool to echo back input messages - Input: - `message` (string): Message to echo back - Returns: Text content with echoed message 2. `add` - Adds two numbers together - Inputs: - `a` (number): First number - `b` (number): Second number - Returns: Text result of the addition 3. `longRunningOperation` - Demonstrates progress notifications for long operations - Inputs: - `duration` (number, default: 10): Duration in seconds - `steps` (number, default: 5): Number of progress steps - Returns: Completion message with duration and steps - Sends progress notifications during execution 4. `sampleLLM` - Demonstrates LLM sampling capability using MCP sampling feature - Inputs: - `prompt` (string): The prompt to send to the LLM - `maxTokens` (number, default: 100): Maximum tokens to generate - Returns: Generated LLM response 5. `getTinyImage` - Returns a small test image - No inputs required - Returns: Base64 encoded PNG image data 6. `printEnv` - Prints all environment variables - Useful for debugging MCP server configuration - No inputs required - Returns: JSON string of all environment variables 7. `annotatedMessage` - Demonstrates how annotations can be used to provide metadata about content - Inputs: - `messageType` (enum: "error" | "success" | "debug"): Type of message to demonstrate different annotation patterns - `includeImage` (boolean, default: false): Whether to include an example image - Returns: Content with varying annotations: - Error messages: High priority (1.0), visible to both user and assistant - Success messages: Medium priority (0.7), user-focused - Debug messages: Low priority (0.3), assistant-focused - Optional image: Medium priority (0.5), user-focused - Example annotations: ```json { "priority": 1.0, "audience": ["user", "assistant"] } ``` ### Resources The server provides 100 test resources in two formats: - Even numbered resources: - Plaintext format - URI pattern: `test://static/resource/{even_number}` - Content: Simple text description - Odd numbered resources: - Binary blob format - URI pattern: `test://static/resource/{odd_number}` - Content: Base64 encoded binary data Resource features: - Supports pagination (10 items per page) - Allows subscribing to resource updates - Demonstrates resource templates - Auto-updates subscribed resources every 5 seconds ### Prompts 1. `simple_prompt` - Basic prompt without arguments - Returns: Single message exchange 2. `complex_prompt` - Advanced prompt demonstrating argument handling - Required arguments: - `temperature` (number): Temperature setting - Optional arguments: - `style` (string): Output style preference - Returns: Multi-turn conversation with images ## Usage with Claude Desktop Add to your `claude_desktop_config.json`: ```json { "mcpServers": { "everything": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-everything" ] } } } ```
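If you are building an MCP client against this test server, a minimal sketch using the MCP TypeScript SDK might look like the following; it assumes the `@modelcontextprotocol/server-everything` package from the configuration above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Start the test server over stdio.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-everything"],
}));

// Exercise the simple `add` tool documented above.
const sum = await client.callTool({
  name: "add",
  arguments: { a: 2, b: 3 },
});
console.log(sum.content); // Text result of the addition
```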
{ "source": "modelcontextprotocol/servers", "title": "src/everything/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/everything/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 3468 }
# Fetch MCP Server

A Model Context Protocol server that provides web content fetching capabilities. This server enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.

The fetch tool will truncate the response, but by using the `start_index` argument, you can specify where to start the content extraction. This lets models read a webpage in chunks until they find the information they need.

### Available Tools

- `fetch` - Fetches a URL from the internet and extracts its contents as markdown.
  - `url` (string, required): URL to fetch
  - `max_length` (integer, optional): Maximum number of characters to return (default: 5000)
  - `start_index` (integer, optional): Start content from this character index (default: 0)
  - `raw` (boolean, optional): Get raw content without markdown conversion (default: false)

### Prompts

- **fetch**
  - Fetch a URL and extract its contents as markdown
  - Arguments:
    - `url` (string, required): URL to fetch

## Installation

Optionally, install Node.js; this will cause the fetch server to use a different, more robust HTML simplifier.

### Using uv (recommended)

When using [`uv`](https://docs.astral.sh/uv/) no specific installation is needed. We will use [`uvx`](https://docs.astral.sh/uv/guides/tools/) to directly run *mcp-server-fetch*.

### Using PIP

Alternatively you can install `mcp-server-fetch` via pip:

```
pip install mcp-server-fetch
```

After installation, you can run it as a script using:

```
python -m mcp_server_fetch
```

## Configuration

### Configure for Claude.app

Add to your Claude settings:

<details>
<summary>Using uvx</summary>

```json
"mcpServers": {
  "fetch": {
    "command": "uvx",
    "args": ["mcp-server-fetch"]
  }
}
```
</details>

<details>
<summary>Using docker</summary>

```json
"mcpServers": {
  "fetch": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "mcp/fetch"]
  }
}
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"mcpServers": {
  "fetch": {
    "command": "python",
    "args": ["-m", "mcp_server_fetch"]
  }
}
```
</details>

### Customization - robots.txt

By default, the server will obey a website's robots.txt file if the request came from the model (via a tool), but not if the request was user-initiated (via a prompt). This can be disabled by adding the argument `--ignore-robots-txt` to the `args` list in the configuration.

### Customization - User-agent

By default, depending on whether the request came from the model (via a tool) or was user-initiated (via a prompt), the server will use either the user-agent

```
ModelContextProtocol/1.0 (Autonomous; +https://github.com/modelcontextprotocol/servers)
```

or

```
ModelContextProtocol/1.0 (User-Specified; +https://github.com/modelcontextprotocol/servers)
```

This can be customized by adding the argument `--user-agent=YourUserAgent` to the `args` list in the configuration.

## Debugging

You can use the MCP inspector to debug the server. For uvx installations:

```
npx @modelcontextprotocol/inspector uvx mcp-server-fetch
```

Or if you've installed the package in a specific directory or are developing on it:

```
cd path/to/servers/src/fetch
npx @modelcontextprotocol/inspector uv run mcp-server-fetch
```

## Contributing

We encourage contributions to help expand and improve mcp-server-fetch. Whether you want to add new tools, enhance existing functionality, or improve documentation, your input is valuable.

For examples of other MCP servers and implementation patterns, see: https://github.com/modelcontextprotocol/servers

Pull requests are welcome!
Feel free to contribute new ideas, bug fixes, or enhancements to make mcp-server-fetch even more powerful and useful. ## License mcp-server-fetch is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
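As a non-normative sketch, the `fetch` tool can also be called programmatically via the MCP TypeScript SDK; here the server is launched with `uvx` as recommended above, and the URL is an arbitrary example.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Run the Python server with uvx over stdio.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "uvx",
  args: ["mcp-server-fetch"],
}));

// Fetch a page as markdown, capped at 2000 characters (see tool arguments above).
const page = await client.callTool({
  name: "fetch",
  arguments: { url: "https://example.com", max_length: 2000 },
});
console.log(page.content);
```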
{ "source": "modelcontextprotocol/servers", "title": "src/fetch/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/fetch/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 4033 }
# Filesystem MCP Server Node.js server implementing Model Context Protocol (MCP) for filesystem operations. ## Features - Read/write files - Create/list/delete directories - Move files/directories - Search files - Get file metadata **Note**: The server will only allow operations within directories specified via `args`. ## API ### Resources - `file://system`: File system operations interface ### Tools - **read_file** - Read complete contents of a file - Input: `path` (string) - Reads complete file contents with UTF-8 encoding - **read_multiple_files** - Read multiple files simultaneously - Input: `paths` (string[]) - Failed reads won't stop the entire operation - **write_file** - Create new file or overwrite existing (exercise caution with this) - Inputs: - `path` (string): File location - `content` (string): File content - **edit_file** - Make selective edits using advanced pattern matching and formatting - Features: - Line-based and multi-line content matching - Whitespace normalization with indentation preservation - Fuzzy matching with confidence scoring - Multiple simultaneous edits with correct positioning - Indentation style detection and preservation - Git-style diff output with context - Preview changes with dry run mode - Failed match debugging with confidence scores - Inputs: - `path` (string): File to edit - `edits` (array): List of edit operations - `oldText` (string): Text to search for (can be substring) - `newText` (string): Text to replace with - `dryRun` (boolean): Preview changes without applying (default: false) - `options` (object): Optional formatting settings - `preserveIndentation` (boolean): Keep existing indentation (default: true) - `normalizeWhitespace` (boolean): Normalize spaces while preserving structure (default: true) - `partialMatch` (boolean): Enable fuzzy matching (default: true) - Returns detailed diff and match information for dry runs, otherwise applies changes - Best Practice: Always use dryRun first to preview changes before applying them - **create_directory** - Create new directory or ensure it exists - Input: `path` (string) - Creates parent directories if needed - Succeeds silently if directory exists - **list_directory** - List directory contents with [FILE] or [DIR] prefixes - Input: `path` (string) - **move_file** - Move or rename files and directories - Inputs: - `source` (string) - `destination` (string) - Fails if destination exists - **search_files** - Recursively search for files/directories - Inputs: - `path` (string): Starting directory - `pattern` (string): Search pattern - `excludePatterns` (string[]): Exclude any patterns. Glob formats are supported. - Case-insensitive matching - Returns full paths to matches - **get_file_info** - Get detailed file/directory metadata - Input: `path` (string) - Returns: - Size - Creation time - Modified time - Access time - Type (file/directory) - Permissions - **list_allowed_directories** - List all directories the server is allowed to access - No input required - Returns: - Directories that this server can read/write from ## Usage with Claude Desktop Add this to your `claude_desktop_config.json`: Note: you can provide sandboxed directories to the server by mounting them to `/projects`. Adding the `ro` flag will make the directory readonly by the server. ### Docker Note: all directories must be mounted to `/projects` by default. 
```json { "mcpServers": { "filesystem": { "command": "docker", "args": [ "run", "-i", "--rm", "--mount", "type=bind,src=/Users/username/Desktop,dst=/projects/Desktop", "--mount", "type=bind,src=/path/to/other/allowed/dir,dst=/projects/other/allowed/dir,ro", "--mount", "type=bind,src=/path/to/file.txt,dst=/projects/path/to/file.txt", "mcp/filesystem", "/projects" ] } } } ``` ### NPX ```json { "mcpServers": { "filesystem": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-filesystem", "/Users/username/Desktop", "/path/to/other/allowed/dir" ] } } } ``` ## Build Docker build: ```bash docker build -t mcp/filesystem -f src/filesystem/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
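For illustration, here is a sketch of a dry-run `edit_file` call via the MCP TypeScript SDK, following the "preview before applying" best practice noted above; the allowed directory and the file path are hypothetical.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Start the server with one allowed directory (hypothetical path).
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/Desktop"],
}));

// Preview an edit with dryRun before actually applying it.
const preview = await client.callTool({
  name: "edit_file",
  arguments: {
    path: "/Users/username/Desktop/notes.txt",        // hypothetical file inside the allowed directory
    edits: [{ oldText: "TODO", newText: "DONE" }],
    dryRun: true,                                      // returns a git-style diff without writing
  },
});
console.log(preview.content);
```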
{ "source": "modelcontextprotocol/servers", "title": "src/filesystem/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/filesystem/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 4691 }
# Google Drive server This MCP server integrates with Google Drive to allow listing, reading, and searching over files. ## Components ### Tools - **search** - Search for files in Google Drive - Input: `query` (string): Search query - Returns file names and MIME types of matching files ### Resources The server provides access to Google Drive files: - **Files** (`gdrive:///<file_id>`) - Supports all file types - Google Workspace files are automatically exported: - Docs → Markdown - Sheets → CSV - Presentations → Plain text - Drawings → PNG - Other files are provided in their native format ## Getting started 1. [Create a new Google Cloud project](https://console.cloud.google.com/projectcreate) 2. [Enable the Google Drive API](https://console.cloud.google.com/workspace-api/products) 3. [Configure an OAuth consent screen](https://console.cloud.google.com/apis/credentials/consent) ("internal" is fine for testing) 4. Add OAuth scope `https://www.googleapis.com/auth/drive.readonly` 5. [Create an OAuth Client ID](https://console.cloud.google.com/apis/credentials/oauthclient) for application type "Desktop App" 6. Download the JSON file of your client's OAuth keys 7. Rename the key file to `gcp-oauth.keys.json` and place into the root of this repo (i.e. `servers/gcp-oauth.keys.json`) Make sure to build the server with either `npm run build` or `npm run watch`. ### Authentication To authenticate and save credentials: 1. Run the server with the `auth` argument: `node ./dist auth` 2. This will open an authentication flow in your system browser 3. Complete the authentication process 4. Credentials will be saved in the root of this repo (i.e. `servers/.gdrive-server-credentials.json`) ### Usage with Desktop App To integrate this server with the desktop app, add the following to your app's server configuration: #### Docker Authentication: Assuming you have completed setting up the OAuth application on Google Cloud, you can now auth the server with the following command, replacing `/path/to/gcp-oauth.keys.json` with the path to your OAuth keys file: ```bash docker run -i --rm --mount type=bind,source=/path/to/gcp-oauth.keys.json,target=/gcp-oauth.keys.json -v mcp-gdrive:/gdrive-server -e GDRIVE_OAUTH_PATH=/gcp-oauth.keys.json -e "GDRIVE_CREDENTIALS_PATH=/gdrive-server/credentials.json" -p 3000:3000 mcp/gdrive auth ``` The command will print the URL to open in your browser. Open this URL in your browser and complete the authentication process. The credentials will be saved in the `mcp-gdrive` volume. Once authenticated, you can use the server in your app's server configuration: ```json { "mcpServers": { "gdrive": { "command": "docker", "args": ["run", "-i", "--rm", "-v", "mcp-gdrive:/gdrive-server", "-e", "GDRIVE_CREDENTIALS_PATH=/gdrive-server/credentials.json", "mcp/gdrive"] } } } ``` #### NPX ```json { "mcpServers": { "gdrive": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-gdrive" ] } } } ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
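Assuming authentication has already been completed as described above, a client-side call to the `search` tool might look like this sketch using the MCP TypeScript SDK (package name per the NPX configuration); the query is arbitrary.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server; saved credentials from the auth flow are expected to exist.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-gdrive"],
}));

// Search Drive and print matching file names and MIME types.
const matches = await client.callTool({
  name: "search",
  arguments: { query: "quarterly report" },
});
console.log(matches.content);
```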
{ "source": "modelcontextprotocol/servers", "title": "src/gdrive/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/gdrive/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 3333 }
# mcp-server-git: A git MCP server ## Overview A Model Context Protocol server for Git repository interaction and automation. This server provides tools to read, search, and manipulate Git repositories via Large Language Models. Please note that mcp-server-git is currently in early development. The functionality and available tools are subject to change and expansion as we continue to develop and improve the server. ### Tools 1. `git_status` - Shows the working tree status - Input: - `repo_path` (string): Path to Git repository - Returns: Current status of working directory as text output 2. `git_diff_unstaged` - Shows changes in working directory not yet staged - Input: - `repo_path` (string): Path to Git repository - Returns: Diff output of unstaged changes 3. `git_diff_staged` - Shows changes that are staged for commit - Input: - `repo_path` (string): Path to Git repository - Returns: Diff output of staged changes 4. `git_diff` - Shows differences between branches or commits - Inputs: - `repo_path` (string): Path to Git repository - `target` (string): Target branch or commit to compare with - Returns: Diff output comparing current state with target 5. `git_commit` - Records changes to the repository - Inputs: - `repo_path` (string): Path to Git repository - `message` (string): Commit message - Returns: Confirmation with new commit hash 6. `git_add` - Adds file contents to the staging area - Inputs: - `repo_path` (string): Path to Git repository - `files` (string[]): Array of file paths to stage - Returns: Confirmation of staged files 7. `git_reset` - Unstages all staged changes - Input: - `repo_path` (string): Path to Git repository - Returns: Confirmation of reset operation 8. `git_log` - Shows the commit logs - Inputs: - `repo_path` (string): Path to Git repository - `max_count` (number, optional): Maximum number of commits to show (default: 10) - Returns: Array of commit entries with hash, author, date, and message 9. `git_create_branch` - Creates a new branch - Inputs: - `repo_path` (string): Path to Git repository - `branch_name` (string): Name of the new branch - `start_point` (string, optional): Starting point for the new branch - Returns: Confirmation of branch creation 10. `git_checkout` - Switches branches - Inputs: - `repo_path` (string): Path to Git repository - `branch_name` (string): Name of branch to checkout - Returns: Confirmation of branch switch 11. `git_show` - Shows the contents of a commit - Inputs: - `repo_path` (string): Path to Git repository - `revision` (string): The revision (commit hash, branch name, tag) to show - Returns: Contents of the specified commit 12. `git_init` - Initializes a Git repository - Inputs: - `repo_path` (string): Path to directory to initialize git repo - Returns: Confirmation of repository initialization ## Installation ### Using uv (recommended) When using [`uv`](https://docs.astral.sh/uv/) no specific installation is needed. We will use [`uvx`](https://docs.astral.sh/uv/guides/tools/) to directly run *mcp-server-git*. 
### Using PIP

Alternatively you can install `mcp-server-git` via pip:

```
pip install mcp-server-git
```

After installation, you can run it as a script using:

```
python -m mcp_server_git
```

## Configuration

### Usage with Claude Desktop

Add this to your `claude_desktop_config.json`:

<details>
<summary>Using uvx</summary>

```json
"mcpServers": {
  "git": {
    "command": "uvx",
    "args": ["mcp-server-git", "--repository", "path/to/git/repo"]
  }
}
```
</details>

<details>
<summary>Using docker</summary>

* Note: replace '/Users/username' with a path that you want to be accessible by this tool

```json
"mcpServers": {
  "git": {
    "command": "docker",
    "args": ["run", "--rm", "-i", "--mount", "type=bind,src=/Users/username,dst=/Users/username", "mcp/git"]
  }
}
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"mcpServers": {
  "git": {
    "command": "python",
    "args": ["-m", "mcp_server_git", "--repository", "path/to/git/repo"]
  }
}
```
</details>

### Usage with [Zed](https://github.com/zed-industries/zed)

Add to your Zed settings.json:

<details>
<summary>Using uvx</summary>

```json
"context_servers": {
  "mcp-server-git": {
    "command": {
      "path": "uvx",
      "args": ["mcp-server-git"]
    }
  }
},
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"context_servers": {
  "mcp-server-git": {
    "command": {
      "path": "python",
      "args": ["-m", "mcp_server_git"]
    }
  }
},
```
</details>

## Debugging

You can use the MCP inspector to debug the server. For uvx installations:

```
npx @modelcontextprotocol/inspector uvx mcp-server-git
```

Or if you've installed the package in a specific directory or are developing on it:

```
cd path/to/servers/src/git
npx @modelcontextprotocol/inspector uv run mcp-server-git
```

Running `tail -n 20 -f ~/Library/Logs/Claude/mcp*.log` will show the logs from the server and may help you debug any issues.

## Development

If you are doing local development, there are two ways to test your changes:

1. Run the MCP inspector to test your changes. See [Debugging](#debugging) for run instructions.
2. Test using the Claude desktop app. Add the following to your `claude_desktop_config.json`:

### Docker

```json
{
  "mcpServers": {
    "git": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "--mount", "type=bind,src=/Users/username/Desktop,dst=/projects/Desktop",
        "--mount", "type=bind,src=/path/to/other/allowed/dir,dst=/projects/other/allowed/dir,ro",
        "--mount", "type=bind,src=/path/to/file.txt,dst=/projects/path/to/file.txt",
        "mcp/git"
      ]
    }
  }
}
```

### UVX

```json
{
  "mcpServers": {
    "git": {
      "command": "uv",
      "args": [
        "--directory",
        "/<path to mcp-servers>/mcp-servers/src/git",
        "run",
        "mcp-server-git"
      ]
    }
  }
}
```

## Build

Docker build:

```bash
cd src/git
docker build -t mcp/git .
```

## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
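As a rough illustration, the tools above can be invoked from the MCP TypeScript SDK; this sketch launches the server with `uvx` and calls `git_status` and `git_log` against a placeholder repository path.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Run the Python server with uvx over stdio.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "uvx",
  args: ["mcp-server-git"],
}));

// Working tree status for a placeholder repository.
const status = await client.callTool({
  name: "git_status",
  arguments: { repo_path: "/path/to/git/repo" },
});
console.log(status.content);

// Last 5 commits from the same repository.
const log = await client.callTool({
  name: "git_log",
  arguments: { repo_path: "/path/to/git/repo", max_count: 5 },
});
console.log(log.content);
```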
{ "source": "modelcontextprotocol/servers", "title": "src/git/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/git/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 6504 }
# GitHub MCP Server MCP Server for the GitHub API, enabling file operations, repository management, search functionality, and more. ### Features - **Automatic Branch Creation**: When creating/updating files or pushing changes, branches are automatically created if they don't exist - **Comprehensive Error Handling**: Clear error messages for common issues - **Git History Preservation**: Operations maintain proper Git history without force pushing - **Batch Operations**: Support for both single-file and multi-file operations - **Advanced Search**: Support for searching code, issues/PRs, and users ## Tools 1. `create_or_update_file` - Create or update a single file in a repository - Inputs: - `owner` (string): Repository owner (username or organization) - `repo` (string): Repository name - `path` (string): Path where to create/update the file - `content` (string): Content of the file - `message` (string): Commit message - `branch` (string): Branch to create/update the file in - `sha` (optional string): SHA of file being replaced (for updates) - Returns: File content and commit details 2. `push_files` - Push multiple files in a single commit - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `branch` (string): Branch to push to - `files` (array): Files to push, each with `path` and `content` - `message` (string): Commit message - Returns: Updated branch reference 3. `search_repositories` - Search for GitHub repositories - Inputs: - `query` (string): Search query - `page` (optional number): Page number for pagination - `perPage` (optional number): Results per page (max 100) - Returns: Repository search results 4. `create_repository` - Create a new GitHub repository - Inputs: - `name` (string): Repository name - `description` (optional string): Repository description - `private` (optional boolean): Whether repo should be private - `autoInit` (optional boolean): Initialize with README - Returns: Created repository details 5. `get_file_contents` - Get contents of a file or directory - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `path` (string): Path to file/directory - `branch` (optional string): Branch to get contents from - Returns: File/directory contents 6. `create_issue` - Create a new issue - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `title` (string): Issue title - `body` (optional string): Issue description - `assignees` (optional string[]): Usernames to assign - `labels` (optional string[]): Labels to add - `milestone` (optional number): Milestone number - Returns: Created issue details 7. `create_pull_request` - Create a new pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `title` (string): PR title - `body` (optional string): PR description - `head` (string): Branch containing changes - `base` (string): Branch to merge into - `draft` (optional boolean): Create as draft PR - `maintainer_can_modify` (optional boolean): Allow maintainer edits - Returns: Created pull request details 8. `fork_repository` - Fork a repository - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `organization` (optional string): Organization to fork to - Returns: Forked repository details 9. 
`create_branch` - Create a new branch - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `branch` (string): Name for new branch - `from_branch` (optional string): Source branch (defaults to repo default) - Returns: Created branch reference 10. `list_issues` - List and filter repository issues - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `state` (optional string): Filter by state ('open', 'closed', 'all') - `labels` (optional string[]): Filter by labels - `sort` (optional string): Sort by ('created', 'updated', 'comments') - `direction` (optional string): Sort direction ('asc', 'desc') - `since` (optional string): Filter by date (ISO 8601 timestamp) - `page` (optional number): Page number - `per_page` (optional number): Results per page - Returns: Array of issue details 11. `update_issue` - Update an existing issue - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `issue_number` (number): Issue number to update - `title` (optional string): New title - `body` (optional string): New description - `state` (optional string): New state ('open' or 'closed') - `labels` (optional string[]): New labels - `assignees` (optional string[]): New assignees - `milestone` (optional number): New milestone number - Returns: Updated issue details 12. `add_issue_comment` - Add a comment to an issue - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `issue_number` (number): Issue number to comment on - `body` (string): Comment text - Returns: Created comment details 13. `search_code` - Search for code across GitHub repositories - Inputs: - `q` (string): Search query using GitHub code search syntax - `sort` (optional string): Sort field ('indexed' only) - `order` (optional string): Sort order ('asc' or 'desc') - `per_page` (optional number): Results per page (max 100) - `page` (optional number): Page number - Returns: Code search results with repository context 14. `search_issues` - Search for issues and pull requests - Inputs: - `q` (string): Search query using GitHub issues search syntax - `sort` (optional string): Sort field (comments, reactions, created, etc.) - `order` (optional string): Sort order ('asc' or 'desc') - `per_page` (optional number): Results per page (max 100) - `page` (optional number): Page number - Returns: Issue and pull request search results 15. `search_users` - Search for GitHub users - Inputs: - `q` (string): Search query using GitHub users search syntax - `sort` (optional string): Sort field (followers, repositories, joined) - `order` (optional string): Sort order ('asc' or 'desc') - `per_page` (optional number): Results per page (max 100) - `page` (optional number): Page number - Returns: User search results 16. `list_commits` - Gets commits of a branch in a repository - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `page` (optional string): page number - `per_page` (optional string): number of record per page - `sha` (optional string): branch name - Returns: List of commits 17. `get_issue` - Gets the contents of an issue within a repository - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `issue_number` (number): Issue number to retrieve - Returns: Github Issue object & details 18. 
`get_pull_request` - Get details of a specific pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - Returns: Pull request details including diff and review status 19. `list_pull_requests` - List and filter repository pull requests - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `state` (optional string): Filter by state ('open', 'closed', 'all') - `head` (optional string): Filter by head user/org and branch - `base` (optional string): Filter by base branch - `sort` (optional string): Sort by ('created', 'updated', 'popularity', 'long-running') - `direction` (optional string): Sort direction ('asc', 'desc') - `per_page` (optional number): Results per page (max 100) - `page` (optional number): Page number - Returns: Array of pull request details 20. `create_pull_request_review` - Create a review on a pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - `body` (string): Review comment text - `event` (string): Review action ('APPROVE', 'REQUEST_CHANGES', 'COMMENT') - `commit_id` (optional string): SHA of commit to review - `comments` (optional array): Line-specific comments, each with: - `path` (string): File path - `position` (number): Line position in diff - `body` (string): Comment text - Returns: Created review details 21. `merge_pull_request` - Merge a pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - `commit_title` (optional string): Title for merge commit - `commit_message` (optional string): Extra detail for merge commit - `merge_method` (optional string): Merge method ('merge', 'squash', 'rebase') - Returns: Merge result details 22. `get_pull_request_files` - Get the list of files changed in a pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - Returns: Array of changed files with patch and status details 23. `get_pull_request_status` - Get the combined status of all status checks for a pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - Returns: Combined status check results and individual check details 24. `update_pull_request_branch` - Update a pull request branch with the latest changes from the base branch (equivalent to GitHub's "Update branch" button) - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - `expected_head_sha` (optional string): The expected SHA of the pull request's HEAD ref - Returns: Success message when branch is updated 25. `get_pull_request_comments` - Get the review comments on a pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - Returns: Array of pull request review comments with details like the comment text, author, and location in the diff 26. 
`get_pull_request_reviews` - Get the reviews on a pull request - Inputs: - `owner` (string): Repository owner - `repo` (string): Repository name - `pull_number` (number): Pull request number - Returns: Array of pull request reviews with details like the review state (APPROVED, CHANGES_REQUESTED, etc.), reviewer, and review body ## Search Query Syntax ### Code Search - `language:javascript`: Search by programming language - `repo:owner/name`: Search in specific repository - `path:app/src`: Search in specific path - `extension:js`: Search by file extension - Example: `q: "import express" language:typescript path:src/` ### Issues Search - `is:issue` or `is:pr`: Filter by type - `is:open` or `is:closed`: Filter by state - `label:bug`: Search by label - `author:username`: Search by author - Example: `q: "memory leak" is:issue is:open label:bug` ### Users Search - `type:user` or `type:org`: Filter by account type - `followers:>1000`: Filter by followers - `location:London`: Search by location - Example: `q: "fullstack developer" location:London followers:>100` For detailed search syntax, see [GitHub's searching documentation](https://docs.github.com/en/search-github/searching-on-github). ## Setup ### Personal Access Token [Create a GitHub Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens) with appropriate permissions: - Go to [Personal access tokens](https://github.com/settings/tokens) (in GitHub Settings > Developer settings) - Select which repositories you'd like this token to have access to (Public, All, or Select) - Create a token with the `repo` scope ("Full control of private repositories") - Alternatively, if working only with public repositories, select only the `public_repo` scope - Copy the generated token ### Usage with Claude Desktop To use this with Claude Desktop, add the following to your `claude_desktop_config.json`: #### Docker ```json { "mcpServers": { "github": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "GITHUB_PERSONAL_ACCESS_TOKEN", "mcp/github" ], "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>" } } } } ``` ### NPX ```json { "mcpServers": { "github": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-github" ], "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>" } } } } ``` ## Build Docker build: ```bash docker build -t mcp/github -f src/github/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
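For illustration only, here is a sketch of calling the read-only `search_repositories` tool via the MCP TypeScript SDK, using the NPX configuration and token placeholder shown above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server with a personal access token (placeholder).
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  env: { GITHUB_PERSONAL_ACCESS_TOKEN: "<YOUR_TOKEN>" },
}));

// Search repositories; see the tool's inputs in the list above.
const repos = await client.callTool({
  name: "search_repositories",
  arguments: { query: "model context protocol", perPage: 5 },
});
console.log(repos.content);
```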
{ "source": "modelcontextprotocol/servers", "title": "src/github/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/github/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 13757 }
# GitLab MCP Server MCP Server for the GitLab API, enabling project management, file operations, and more. ### Features - **Automatic Branch Creation**: When creating/updating files or pushing changes, branches are automatically created if they don't exist - **Comprehensive Error Handling**: Clear error messages for common issues - **Git History Preservation**: Operations maintain proper Git history without force pushing - **Batch Operations**: Support for both single-file and multi-file operations ## Tools 1. `create_or_update_file` - Create or update a single file in a project - Inputs: - `project_id` (string): Project ID or URL-encoded path - `file_path` (string): Path where to create/update the file - `content` (string): Content of the file - `commit_message` (string): Commit message - `branch` (string): Branch to create/update the file in - `previous_path` (optional string): Path of the file to move/rename - Returns: File content and commit details 2. `push_files` - Push multiple files in a single commit - Inputs: - `project_id` (string): Project ID or URL-encoded path - `branch` (string): Branch to push to - `files` (array): Files to push, each with `file_path` and `content` - `commit_message` (string): Commit message - Returns: Updated branch reference 3. `search_repositories` - Search for GitLab projects - Inputs: - `search` (string): Search query - `page` (optional number): Page number for pagination - `per_page` (optional number): Results per page (default 20) - Returns: Project search results 4. `create_repository` - Create a new GitLab project - Inputs: - `name` (string): Project name - `description` (optional string): Project description - `visibility` (optional string): 'private', 'internal', or 'public' - `initialize_with_readme` (optional boolean): Initialize with README - Returns: Created project details 5. `get_file_contents` - Get contents of a file or directory - Inputs: - `project_id` (string): Project ID or URL-encoded path - `file_path` (string): Path to file/directory - `ref` (optional string): Branch/tag/commit to get contents from - Returns: File/directory contents 6. `create_issue` - Create a new issue - Inputs: - `project_id` (string): Project ID or URL-encoded path - `title` (string): Issue title - `description` (optional string): Issue description - `assignee_ids` (optional number[]): User IDs to assign - `labels` (optional string[]): Labels to add - `milestone_id` (optional number): Milestone ID - Returns: Created issue details 7. `create_merge_request` - Create a new merge request - Inputs: - `project_id` (string): Project ID or URL-encoded path - `title` (string): MR title - `description` (optional string): MR description - `source_branch` (string): Branch containing changes - `target_branch` (string): Branch to merge into - `draft` (optional boolean): Create as draft MR - `allow_collaboration` (optional boolean): Allow commits from upstream members - Returns: Created merge request details 8. `fork_repository` - Fork a project - Inputs: - `project_id` (string): Project ID or URL-encoded path - `namespace` (optional string): Namespace to fork to - Returns: Forked project details 9. 
`create_branch` - Create a new branch - Inputs: - `project_id` (string): Project ID or URL-encoded path - `branch` (string): Name for new branch - `ref` (optional string): Source branch/commit for new branch - Returns: Created branch reference ## Setup ### Personal Access Token [Create a GitLab Personal Access Token](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html) with appropriate permissions: - Go to User Settings > Access Tokens in GitLab - Select the required scopes: - `api` for full API access - `read_api` for read-only access - `read_repository` and `write_repository` for repository operations - Create the token and save it securely ### Usage with Claude Desktop Add the following to your `claude_desktop_config.json`: #### Docker ```json { "mcpServers": { "gitlab": { "command": "docker", "args": [ "run", "--rm", "-i", "-e", "GITLAB_PERSONAL_ACCESS_TOKEN", "-e", "GITLAB_API_URL", "mcp/gitlab" ], "env": { "GITLAB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>", "GITLAB_API_URL": "https://gitlab.com/api/v4" // Optional, for self-hosted instances } } } } ``` ### NPX ```json { "mcpServers": { "gitlab": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-gitlab" ], "env": { "GITLAB_PERSONAL_ACCESS_TOKEN": "<YOUR_TOKEN>", "GITLAB_API_URL": "https://gitlab.com/api/v4" // Optional, for self-hosted instances } } } } ``` ## Build Docker build: ```bash docker build -t vonwig/gitlab:mcp -f src/gitlab/Dockerfile . ``` ## Environment Variables - `GITLAB_PERSONAL_ACCESS_TOKEN`: Your GitLab personal access token (required) - `GITLAB_API_URL`: Base URL for GitLab API (optional, defaults to `https://gitlab.com/api/v4`) ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
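A comparable sketch for this server, again assuming the MCP TypeScript SDK client API and the NPX configuration above, reads a file from a placeholder project with the read-only `get_file_contents` tool.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server with a GitLab personal access token (placeholder).
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-gitlab"],
  env: { GITLAB_PERSONAL_ACCESS_TOKEN: "<YOUR_TOKEN>" },
}));

// Fetch README.md from a hypothetical project path.
const file = await client.callTool({
  name: "get_file_contents",
  arguments: { project_id: "your-group/your-project", file_path: "README.md" },
});
console.log(file.content);
```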
{ "source": "modelcontextprotocol/servers", "title": "src/gitlab/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/gitlab/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 5607 }
# Google Maps MCP Server MCP Server for the Google Maps API. ## Tools 1. `maps_geocode` - Convert address to coordinates - Input: `address` (string) - Returns: location, formatted_address, place_id 2. `maps_reverse_geocode` - Convert coordinates to address - Inputs: - `latitude` (number) - `longitude` (number) - Returns: formatted_address, place_id, address_components 3. `maps_search_places` - Search for places using text query - Inputs: - `query` (string) - `location` (optional): { latitude: number, longitude: number } - `radius` (optional): number (meters, max 50000) - Returns: array of places with names, addresses, locations 4. `maps_place_details` - Get detailed information about a place - Input: `place_id` (string) - Returns: name, address, contact info, ratings, reviews, opening hours 5. `maps_distance_matrix` - Calculate distances and times between points - Inputs: - `origins` (string[]) - `destinations` (string[]) - `mode` (optional): "driving" | "walking" | "bicycling" | "transit" - Returns: distances and durations matrix 6. `maps_elevation` - Get elevation data for locations - Input: `locations` (array of {latitude, longitude}) - Returns: elevation data for each point 7. `maps_directions` - Get directions between points - Inputs: - `origin` (string) - `destination` (string) - `mode` (optional): "driving" | "walking" | "bicycling" | "transit" - Returns: route details with steps, distance, duration ## Setup ### API Key Get a Google Maps API key by following the instructions [here](https://developers.google.com/maps/documentation/javascript/get-api-key#create-api-keys). ### Usage with Claude Desktop Add the following to your `claude_desktop_config.json`: #### Docker ```json { "mcpServers": { "google-maps": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "GOOGLE_MAPS_API_KEY", "mcp/google-maps" ], "env": { "GOOGLE_MAPS_API_KEY": "<YOUR_API_KEY>" } } } } ``` ### NPX ```json { "mcpServers": { "google-maps": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-google-maps" ], "env": { "GOOGLE_MAPS_API_KEY": "<YOUR_API_KEY>" } } } } ``` ## Build Docker build: ```bash docker build -t mcp/google-maps -f src/google-maps/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
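As a sketch (assuming the MCP TypeScript SDK and the NPX configuration above), `maps_geocode` can be called as follows; the address is an arbitrary example and the API key is a placeholder.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server with the documented API key environment variable.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-google-maps"],
  env: { GOOGLE_MAPS_API_KEY: "<YOUR_API_KEY>" },
}));

// Geocode an address to coordinates, formatted_address, and place_id.
const geo = await client.callTool({
  name: "maps_geocode",
  arguments: { address: "1600 Amphitheatre Parkway, Mountain View, CA" },
});
console.log(geo.content);
```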
{ "source": "modelcontextprotocol/servers", "title": "src/google-maps/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/google-maps/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 2762 }
# Knowledge Graph Memory Server A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats. ## Core Concepts ### Entities Entities are the primary nodes in the knowledge graph. Each entity has: - A unique name (identifier) - An entity type (e.g., "person", "organization", "event") - A list of observations Example: ```json { "name": "John_Smith", "entityType": "person", "observations": ["Speaks fluent Spanish"] } ``` ### Relations Relations define directed connections between entities. They are always stored in active voice and describe how entities interact or relate to each other. Example: ```json { "from": "John_Smith", "to": "Anthropic", "relationType": "works_at" } ``` ### Observations Observations are discrete pieces of information about an entity. They are: - Stored as strings - Attached to specific entities - Can be added or removed independently - Should be atomic (one fact per observation) Example: ```json { "entityName": "John_Smith", "observations": [ "Speaks fluent Spanish", "Graduated in 2019", "Prefers morning meetings" ] } ``` ## API ### Tools - **create_entities** - Create multiple new entities in the knowledge graph - Input: `entities` (array of objects) - Each object contains: - `name` (string): Entity identifier - `entityType` (string): Type classification - `observations` (string[]): Associated observations - Ignores entities with existing names - **create_relations** - Create multiple new relations between entities - Input: `relations` (array of objects) - Each object contains: - `from` (string): Source entity name - `to` (string): Target entity name - `relationType` (string): Relationship type in active voice - Skips duplicate relations - **add_observations** - Add new observations to existing entities - Input: `observations` (array of objects) - Each object contains: - `entityName` (string): Target entity - `contents` (string[]): New observations to add - Returns added observations per entity - Fails if entity doesn't exist - **delete_entities** - Remove entities and their relations - Input: `entityNames` (string[]) - Cascading deletion of associated relations - Silent operation if entity doesn't exist - **delete_observations** - Remove specific observations from entities - Input: `deletions` (array of objects) - Each object contains: - `entityName` (string): Target entity - `observations` (string[]): Observations to remove - Silent operation if observation doesn't exist - **delete_relations** - Remove specific relations from the graph - Input: `relations` (array of objects) - Each object contains: - `from` (string): Source entity name - `to` (string): Target entity name - `relationType` (string): Relationship type - Silent operation if relation doesn't exist - **read_graph** - Read the entire knowledge graph - No input required - Returns complete graph structure with all entities and relations - **search_nodes** - Search for nodes based on query - Input: `query` (string) - Searches across: - Entity names - Entity types - Observation content - Returns matching entities and their relations - **open_nodes** - Retrieve specific nodes by name - Input: `names` (string[]) - Returns: - Requested entities - Relations between requested entities - Silently skips non-existent nodes # Usage with Claude Desktop ### Setup Add this to your claude_desktop_config.json: #### Docker ```json { "mcpServers": { "memory": { "command": "docker", "args": ["run", "-i", "-v", "claude-memory:/app/dist", "--rm", 
"mcp/memory"] } } } ``` #### NPX ```json { "mcpServers": { "memory": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-memory" ] } } } ``` #### NPX with custom setting The server can be configured using the following environment variables: ```json { "mcpServers": { "memory": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-memory" ], "env": { "MEMORY_FILE_PATH": "/path/to/custom/memory.json" } } } } ``` - `MEMORY_FILE_PATH`: Path to the memory storage JSON file (default: `memory.json` in the server directory) ### System Prompt The prompt for utilizing memory depends on the use case. Changing the prompt will help the model determine the frequency and types of memories created. Here is an example prompt for chat personalization. You could use this prompt in the "Custom Instructions" field of a [Claude.ai Project](https://www.anthropic.com/news/projects). ``` Follow these steps for each interaction: 1. User Identification: - You should assume that you are interacting with default_user - If you have not identified default_user, proactively try to do so. 2. Memory Retrieval: - Always begin your chat by saying only "Remembering..." and retrieve all relevant information from your knowledge graph - Always refer to your knowledge graph as your "memory" 3. Memory - While conversing with the user, be attentive to any new information that falls into these categories: a) Basic Identity (age, gender, location, job title, education level, etc.) b) Behaviors (interests, habits, etc.) c) Preferences (communication style, preferred language, etc.) d) Goals (goals, targets, aspirations, etc.) e) Relationships (personal and professional relationships up to 3 degrees of separation) 4. Memory Update: - If any new information was gathered during the interaction, update your memory as follows: a) Create entities for recurring organizations, people, and significant events b) Connect them to the current entities using relations b) Store facts about them as observations ``` ## Building Docker: ```sh docker build -t mcp/memory -f src/memory/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
{ "source": "modelcontextprotocol/servers", "title": "src/memory/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/memory/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 6356 }
# PostgreSQL A Model Context Protocol server that provides read-only access to PostgreSQL databases. This server enables LLMs to inspect database schemas and execute read-only queries. ## Components ### Tools - **query** - Execute read-only SQL queries against the connected database - Input: `sql` (string): The SQL query to execute - All queries are executed within a READ ONLY transaction ### Resources The server provides schema information for each table in the database: - **Table Schemas** (`postgres://<host>/<table>/schema`) - JSON schema information for each table - Includes column names and data types - Automatically discovered from database metadata ## Usage with Claude Desktop To use this server with the Claude Desktop app, add the following configuration to the "mcpServers" section of your `claude_desktop_config.json`: ### Docker * when running docker on macos, use host.docker.internal if the server is running on the host network (eg localhost) * username/password can be added to the postgresql url with `postgresql://user:password@host:port/db-name` ```json { "mcpServers": { "postgres": { "command": "docker", "args": [ "run", "-i", "--rm", "mcp/postgres", "postgresql://host.docker.internal:5432/mydb"] } } } ``` ### NPX ```json { "mcpServers": { "postgres": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb" ] } } } ``` Replace `/mydb` with your database name. ## Building Docker: ```sh docker build -t mcp/postgres -f src/postgres/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
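As a sketch of the read-only `query` tool (MCP TypeScript SDK assumed; the connection string and table name are placeholders), a client might run:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Start the server with a database URL, as in the NPX configuration above.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"],
}));

// Queries run inside a READ ONLY transaction; "users" is a hypothetical table.
const rows = await client.callTool({
  name: "query",
  arguments: { sql: "SELECT * FROM users LIMIT 5" },
});
console.log(rows.content);
```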
{ "source": "modelcontextprotocol/servers", "title": "src/postgres/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/postgres/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 1947 }
# Puppeteer

A Model Context Protocol server that provides browser automation capabilities using Puppeteer. This server enables LLMs to interact with web pages, take screenshots, and execute JavaScript in a real browser environment.

## Components

### Tools

- **puppeteer_navigate**
  - Navigate to any URL in the browser
  - Input: `url` (string)

- **puppeteer_screenshot**
  - Capture screenshots of the entire page or specific elements
  - Inputs:
    - `name` (string, required): Name for the screenshot
    - `selector` (string, optional): CSS selector for element to screenshot
    - `width` (number, optional, default: 800): Screenshot width
    - `height` (number, optional, default: 600): Screenshot height

- **puppeteer_click**
  - Click elements on the page
  - Input: `selector` (string): CSS selector for element to click

- **puppeteer_hover**
  - Hover elements on the page
  - Input: `selector` (string): CSS selector for element to hover

- **puppeteer_fill**
  - Fill out input fields
  - Inputs:
    - `selector` (string): CSS selector for input field
    - `value` (string): Value to fill

- **puppeteer_select**
  - Select an element with SELECT tag
  - Inputs:
    - `selector` (string): CSS selector for element to select
    - `value` (string): Value to select

- **puppeteer_evaluate**
  - Execute JavaScript in the browser console
  - Input: `script` (string): JavaScript code to execute

### Resources

The server provides access to two types of resources:

1. **Console Logs** (`console://logs`)
   - Browser console output in text format
   - Includes all console messages from the browser

2. **Screenshots** (`screenshot://<name>`)
   - PNG images of captured screenshots
   - Accessible via the screenshot name specified during capture

## Key Features

- Browser automation
- Console log monitoring
- Screenshot capabilities
- JavaScript execution
- Basic web interaction (navigation, clicking, form filling)

## Configuration to use Puppeteer Server

Here's the Claude Desktop configuration to use the Puppeteer server:

### Docker

**NOTE** The Docker implementation will use headless Chromium, whereas the NPX version will open a browser window.

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "--init", "-e", "DOCKER_CONTAINER=true", "mcp/puppeteer"]
    }
  }
}
```

### NPX

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```

## Build

Docker build:

```bash
docker build -t mcp/puppeteer -f src/puppeteer/Dockerfile .
```

## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
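For illustration, a navigate-then-screenshot sequence via the MCP TypeScript SDK might look like this sketch (NPX configuration above; the URL and screenshot name are arbitrary):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Start the Puppeteer server over stdio.
const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
await client.connect(new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-puppeteer"],
}));

// Navigate, then capture a full-page screenshot at 1024x768.
await client.callTool({
  name: "puppeteer_navigate",
  arguments: { url: "https://example.com" },
});

const shot = await client.callTool({
  name: "puppeteer_screenshot",
  arguments: { name: "homepage", width: 1024, height: 768 },
});
console.log(shot.content); // also available later via screenshot://homepage
```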
{ "source": "modelcontextprotocol/servers", "title": "src/puppeteer/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/puppeteer/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 2889 }
# Redis

A Model Context Protocol server that provides access to Redis databases. This server enables LLMs to interact with Redis key-value stores through a set of standardized tools.

## Components

### Tools

- **set**
  - Set a Redis key-value pair with optional expiration
  - Input:
    - `key` (string): Redis key
    - `value` (string): Value to store
    - `expireSeconds` (number, optional): Expiration time in seconds

- **get**
  - Get value by key from Redis
  - Input: `key` (string): Redis key to retrieve

- **delete**
  - Delete one or more keys from Redis
  - Input: `key` (string | string[]): Key or array of keys to delete

- **list**
  - List Redis keys matching a pattern
  - Input: `pattern` (string, optional): Pattern to match keys (default: `*`)

## Usage with Claude Desktop

To use this server with the Claude Desktop app, add the following configuration to the "mcpServers" section of your `claude_desktop_config.json`:

### Docker

* When running Docker on macOS, use `host.docker.internal` if the server is running on the host network (e.g. localhost).
* The Redis URL can be specified as an argument; it defaults to "redis://localhost:6379".

```json
{
  "mcpServers": {
    "redis": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp/redis",
        "redis://host.docker.internal:6379"]
    }
  }
}
```

### NPX

```json
{
  "mcpServers": {
    "redis": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-redis",
        "redis://localhost:6379"
      ]
    }
  }
}
```

## Building

Docker:

```sh
docker build -t mcp/redis -f src/redis/Dockerfile .
```

## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
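For illustration, a small sketch exercising `set`, `get`, and `list` through an already-connected MCP client. The key names and stored value are placeholders, and the connection boilerplate follows the same pattern as the PostgreSQL example above.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumes `client` is already connected to the Redis server over stdio.
export async function cacheSession(client: Client): Promise<void> {
  // Store a value that expires after one hour.
  await client.callTool({
    name: "set",
    arguments: { key: "session:42", value: "logged-in", expireSeconds: 3600 },
  });

  // Read it back.
  const value = await client.callTool({ name: "get", arguments: { key: "session:42" } });
  console.log(value.content);

  // List every key in the session namespace.
  await client.callTool({ name: "list", arguments: { pattern: "session:*" } });
}
```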
{ "source": "modelcontextprotocol/servers", "title": "src/redis/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/redis/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 1933 }
# mcp-server-sentry: A Sentry MCP server

## Overview

A Model Context Protocol server for retrieving and analyzing issues from Sentry.io. This server provides tools to inspect error reports, stacktraces, and other debugging information from your Sentry account.

### Tools

1. `get_sentry_issue`
   - Retrieve and analyze a Sentry issue by ID or URL
   - Input:
     - `issue_id_or_url` (string): Sentry issue ID or URL to analyze
   - Returns: Issue details including:
     - Title
     - Issue ID
     - Status
     - Level
     - First seen timestamp
     - Last seen timestamp
     - Event count
     - Full stacktrace

### Prompts

1. `sentry-issue`
   - Retrieve issue details from Sentry
   - Input:
     - `issue_id_or_url` (string): Sentry issue ID or URL
   - Returns: Formatted issue details as conversation context

## Installation

### Using uv (recommended)

When using [`uv`](https://docs.astral.sh/uv/), no specific installation is needed. We will use [`uvx`](https://docs.astral.sh/uv/guides/tools/) to directly run *mcp-server-sentry*.

### Using PIP

Alternatively, you can install `mcp-server-sentry` via pip:

```
pip install mcp-server-sentry
```

After installation, you can run it as a script using:

```
python -m mcp_server_sentry
```

## Configuration

### Usage with Claude Desktop

Add this to your `claude_desktop_config.json`:

<details>
<summary>Using uvx</summary>

```json
"mcpServers": {
  "sentry": {
    "command": "uvx",
    "args": ["mcp-server-sentry", "--auth-token", "YOUR_SENTRY_TOKEN"]
  }
}
```
</details>

<details>
<summary>Using docker</summary>

```json
"mcpServers": {
  "sentry": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "mcp/sentry", "--auth-token", "YOUR_SENTRY_TOKEN"]
  }
}
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"mcpServers": {
  "sentry": {
    "command": "python",
    "args": ["-m", "mcp_server_sentry", "--auth-token", "YOUR_SENTRY_TOKEN"]
  }
}
```
</details>

### Usage with [Zed](https://github.com/zed-industries/zed)

Add to your Zed settings.json:

<details>
<summary>Using uvx</summary>

```json
"context_servers": {
  "mcp-server-sentry": {
    "command": {
      "path": "uvx",
      "args": ["mcp-server-sentry", "--auth-token", "YOUR_SENTRY_TOKEN"]
    }
  }
},
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"context_servers": {
  "mcp-server-sentry": {
    "command": "python",
    "args": ["-m", "mcp_server_sentry", "--auth-token", "YOUR_SENTRY_TOKEN"]
  }
},
```
</details>

## Debugging

You can use the MCP inspector to debug the server. For uvx installations:

```
npx @modelcontextprotocol/inspector uvx mcp-server-sentry --auth-token YOUR_SENTRY_TOKEN
```

Or if you've installed the package in a specific directory or are developing on it:

```
cd path/to/servers/src/sentry
npx @modelcontextprotocol/inspector uv run mcp-server-sentry --auth-token YOUR_SENTRY_TOKEN
```

## License

This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
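For illustration, a sketch showing the two entry points side by side, the `get_sentry_issue` tool and the `sentry-issue` prompt, via an already-connected MCP client. The issue URL is a placeholder, and the connection boilerplate follows the PostgreSQL example above (with `--auth-token` passed to the server instead of a database URL).

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumes `client` is already connected to the Sentry server over stdio.
export async function inspectIssue(client: Client, issueUrl: string): Promise<void> {
  // Tool form: returns the formatted issue details as tool output.
  const toolResult = await client.callTool({
    name: "get_sentry_issue",
    arguments: { issue_id_or_url: issueUrl },
  });
  console.log(toolResult.content);

  // Prompt form: returns the same details packaged as conversation context.
  const prompt = await client.getPrompt({
    name: "sentry-issue",
    arguments: { issue_id_or_url: issueUrl },
  });
  console.log(prompt.messages);
}
```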
{ "source": "modelcontextprotocol/servers", "title": "src/sentry/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/sentry/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 3213 }
# Sequential Thinking MCP Server An MCP server implementation that provides a tool for dynamic and reflective problem-solving through a structured thinking process. ## Features - Break down complex problems into manageable steps - Revise and refine thoughts as understanding deepens - Branch into alternative paths of reasoning - Adjust the total number of thoughts dynamically - Generate and verify solution hypotheses ## Tool ### sequential_thinking Facilitates a detailed, step-by-step thinking process for problem-solving and analysis. **Inputs:** - `thought` (string): The current thinking step - `nextThoughtNeeded` (boolean): Whether another thought step is needed - `thoughtNumber` (integer): Current thought number - `totalThoughts` (integer): Estimated total thoughts needed - `isRevision` (boolean, optional): Whether this revises previous thinking - `revisesThought` (integer, optional): Which thought is being reconsidered - `branchFromThought` (integer, optional): Branching point thought number - `branchId` (string, optional): Branch identifier - `needsMoreThoughts` (boolean, optional): If more thoughts are needed ## Usage The Sequential Thinking tool is designed for: - Breaking down complex problems into steps - Planning and design with room for revision - Analysis that might need course correction - Problems where the full scope might not be clear initially - Tasks that need to maintain context over multiple steps - Situations where irrelevant information needs to be filtered out ## Configuration ### Usage with Claude Desktop Add this to your `claude_desktop_config.json`: #### npx ```json { "mcpServers": { "sequential-thinking": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-sequential-thinking" ] } } } ``` #### docker ```json { "mcpServers": { "sequentialthinking": { "command": "docker", "args": [ "run", "--rm", "-i", "mcp/sequentialthinking" ] } } } ``` ## Building Docker: ```bash docker build -t mcp/sequentialthinking -f src/sequentialthinking/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
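For illustration, a sketch of two consecutive calls, an initial thought followed by a revision of it, sent through an already-connected MCP client. The thought text and step counts are placeholders, and the connection boilerplate follows the PostgreSQL example above.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumes `client` is already connected to the Sequential Thinking server over stdio.
export async function thinkTwice(client: Client): Promise<void> {
  // First thought in an estimated three-step chain.
  await client.callTool({
    name: "sequential_thinking",
    arguments: {
      thought: "Start by listing the constraints of the problem.",
      thoughtNumber: 1,
      totalThoughts: 3,
      nextThoughtNeeded: true,
    },
  });

  // The second call revises the first thought rather than moving forward.
  await client.callTool({
    name: "sequential_thinking",
    arguments: {
      thought: "One constraint was missed; restate the full list.",
      thoughtNumber: 2,
      totalThoughts: 3,
      nextThoughtNeeded: true,
      isRevision: true,
      revisesThought: 1,
    },
  });
}
```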
{ "source": "modelcontextprotocol/servers", "title": "src/sequentialthinking/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/sequentialthinking/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 2407 }
# Slack MCP Server

MCP Server for the Slack API, enabling Claude to interact with Slack workspaces.

## Tools

1. `slack_list_channels`
   - List public channels in the workspace
   - Optional inputs:
     - `limit` (number, default: 100, max: 200): Maximum number of channels to return
     - `cursor` (string): Pagination cursor for next page
   - Returns: List of channels with their IDs and information

2. `slack_post_message`
   - Post a new message to a Slack channel
   - Required inputs:
     - `channel_id` (string): The ID of the channel to post to
     - `text` (string): The message text to post
   - Returns: Message posting confirmation and timestamp

3. `slack_reply_to_thread`
   - Reply to a specific message thread
   - Required inputs:
     - `channel_id` (string): The channel containing the thread
     - `thread_ts` (string): Timestamp of the parent message
     - `text` (string): The reply text
   - Returns: Reply confirmation and timestamp

4. `slack_add_reaction`
   - Add an emoji reaction to a message
   - Required inputs:
     - `channel_id` (string): The channel containing the message
     - `timestamp` (string): Message timestamp to react to
     - `reaction` (string): Emoji name without colons
   - Returns: Reaction confirmation

5. `slack_get_channel_history`
   - Get recent messages from a channel
   - Required inputs:
     - `channel_id` (string): The channel ID
   - Optional inputs:
     - `limit` (number, default: 10): Number of messages to retrieve
   - Returns: List of messages with their content and metadata

6. `slack_get_thread_replies`
   - Get all replies in a message thread
   - Required inputs:
     - `channel_id` (string): The channel containing the thread
     - `thread_ts` (string): Timestamp of the parent message
   - Returns: List of replies with their content and metadata

7. `slack_get_users`
   - Get list of workspace users with basic profile information
   - Optional inputs:
     - `cursor` (string): Pagination cursor for next page
     - `limit` (number, default: 100, max: 200): Maximum users to return
   - Returns: List of users with their basic profiles

8. `slack_get_user_profile`
   - Get detailed profile information for a specific user
   - Required inputs:
     - `user_id` (string): The user's ID
   - Returns: Detailed user profile information

## Setup

1. Create a Slack App:
   - Visit the [Slack Apps page](https://api.slack.com/apps)
   - Click "Create New App"
   - Choose "From scratch"
   - Name your app and select your workspace

2. Configure Bot Token Scopes:
   Navigate to "OAuth & Permissions" and add these scopes:
   - `channels:history` - View messages and other content in public channels
   - `channels:read` - View basic channel information
   - `chat:write` - Send messages as the app
   - `reactions:write` - Add emoji reactions to messages
   - `users:read` - View users and their basic information

3. Install App to Workspace:
   - Click "Install to Workspace" and authorize the app
   - Save the "Bot User OAuth Token" that starts with `xoxb-`

4.
Get your Team ID (starts with a `T`) by following [this guidance](https://slack.com/help/articles/221769328-Locate-your-Slack-URL-or-ID#find-your-workspace-or-org-id) ### Usage with Claude Desktop Add the following to your `claude_desktop_config.json`: #### npx ```json { "mcpServers": { "slack": { "command": "npx", "args": [ "-y", "@modelcontextprotocol/server-slack" ], "env": { "SLACK_BOT_TOKEN": "xoxb-your-bot-token", "SLACK_TEAM_ID": "T01234567" } } } } ``` #### docker ```json { "mcpServers": { "slack": { "command": "docker", "args": [ "run", "-i", "--rm", "-e", "SLACK_BOT_TOKEN", "-e", "SLACK_TEAM_ID", "mcp/slack" ], "env": { "SLACK_BOT_TOKEN": "xoxb-your-bot-token", "SLACK_TEAM_ID": "T01234567" } } } } ``` ### Troubleshooting If you encounter permission errors, verify that: 1. All required scopes are added to your Slack app 2. The app is properly installed to your workspace 3. The tokens and workspace ID are correctly copied to your configuration 4. The app has been added to the channels it needs to access ## Build Docker build: ```bash docker build -t mcp/slack -f src/slack/Dockerfile . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
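For illustration, a sketch that posts a message and then reacts to one, using an already-connected MCP client. The channel ID and message timestamp are placeholders, and the connection boilerplate follows the PostgreSQL example above, with `SLACK_BOT_TOKEN` and `SLACK_TEAM_ID` passed via the transport's environment.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumes `client` is already connected to the Slack server over stdio.
export async function announce(client: Client, channelId: string): Promise<void> {
  // Post a message to the channel; the tool result includes a confirmation and timestamp.
  const posted = await client.callTool({
    name: "slack_post_message",
    arguments: { channel_id: channelId, text: "Deploy finished :tada:" },
  });
  console.log(posted.content);

  // React to a known message (placeholder timestamp) with an emoji name without colons.
  await client.callTool({
    name: "slack_add_reaction",
    arguments: { channel_id: channelId, timestamp: "1700000000.000100", reaction: "rocket" },
  });
}
```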
{ "source": "modelcontextprotocol/servers", "title": "src/slack/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/slack/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 4647 }
# SQLite MCP Server ## Overview A Model Context Protocol (MCP) server implementation that provides database interaction and business intelligence capabilities through SQLite. This server enables running SQL queries, analyzing business data, and automatically generating business insight memos. ## Components ### Resources The server exposes a single dynamic resource: - `memo://insights`: A continuously updated business insights memo that aggregates discovered insights during analysis - Auto-updates as new insights are discovered via the append-insight tool ### Prompts The server provides a demonstration prompt: - `mcp-demo`: Interactive prompt that guides users through database operations - Required argument: `topic` - The business domain to analyze - Generates appropriate database schemas and sample data - Guides users through analysis and insight generation - Integrates with the business insights memo ### Tools The server offers six core tools: #### Query Tools - `read_query` - Execute SELECT queries to read data from the database - Input: - `query` (string): The SELECT SQL query to execute - Returns: Query results as array of objects - `write_query` - Execute INSERT, UPDATE, or DELETE queries - Input: - `query` (string): The SQL modification query - Returns: `{ affected_rows: number }` - `create_table` - Create new tables in the database - Input: - `query` (string): CREATE TABLE SQL statement - Returns: Confirmation of table creation #### Schema Tools - `list_tables` - Get a list of all tables in the database - No input required - Returns: Array of table names - `describe-table` - View schema information for a specific table - Input: - `table_name` (string): Name of table to describe - Returns: Array of column definitions with names and types #### Analysis Tools - `append_insight` - Add new business insights to the memo resource - Input: - `insight` (string): Business insight discovered from data analysis - Returns: Confirmation of insight addition - Triggers update of memo://insights resource ## Usage with Claude Desktop ### uv ```bash # Add the server to your claude_desktop_config.json "mcpServers": { "sqlite": { "command": "uv", "args": [ "--directory", "parent_of_servers_repo/servers/src/sqlite", "run", "mcp-server-sqlite", "--db-path", "~/test.db" ] } } ``` ### Docker ```json # Add the server to your claude_desktop_config.json "mcpServers": { "sqlite": { "command": "docker", "args": [ "run", "--rm", "-i", "-v", "mcp-test:/mcp", "mcp/sqlite", "--db-path", "/mcp/test.db" ] } } ``` ## Building Docker: ```bash docker build -t mcp/sqlite . ``` ## License This MCP server is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
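For illustration, a sketch of the intended flow, create a table, write a row, read it back, and record an insight, via an already-connected MCP client. The table, SQL, and insight text are placeholders, and the connection boilerplate follows the PostgreSQL example above.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assumes `client` is already connected to the SQLite server over stdio.
export async function demoFlow(client: Client): Promise<void> {
  await client.callTool({
    name: "create_table",
    arguments: {
      query: "CREATE TABLE IF NOT EXISTS sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)",
    },
  });

  await client.callTool({
    name: "write_query",
    arguments: { query: "INSERT INTO sales (region, amount) VALUES ('EMEA', 1200.50)" },
  });

  const rows = await client.callTool({
    name: "read_query",
    arguments: { query: "SELECT region, SUM(amount) AS total FROM sales GROUP BY region" },
  });
  console.log(rows.content);

  // Recording an insight also refreshes the memo://insights resource.
  await client.callTool({
    name: "append_insight",
    arguments: { insight: "EMEA currently accounts for all recorded revenue." },
  });
}
```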
{ "source": "modelcontextprotocol/servers", "title": "src/sqlite/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/sqlite/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 3080 }
# Time MCP Server

A Model Context Protocol server that provides time and timezone conversion capabilities. This server enables LLMs to get current time information and perform timezone conversions using IANA timezone names, with automatic system timezone detection.

### Available Tools

- `get_current_time` - Get current time in a specific timezone or system timezone.
  - Required arguments:
    - `timezone` (string): IANA timezone name (e.g., 'America/New_York', 'Europe/London')

- `convert_time` - Convert time between timezones.
  - Required arguments:
    - `source_timezone` (string): Source IANA timezone name
    - `time` (string): Time in 24-hour format (HH:MM)
    - `target_timezone` (string): Target IANA timezone name

## Installation

### Using uv (recommended)

When using [`uv`](https://docs.astral.sh/uv/), no specific installation is needed. We will use [`uvx`](https://docs.astral.sh/uv/guides/tools/) to directly run *mcp-server-time*.

### Using PIP

Alternatively, you can install `mcp-server-time` via pip:

```bash
pip install mcp-server-time
```

After installation, you can run it as a script using:

```bash
python -m mcp_server_time
```

## Configuration

### Configure for Claude.app

Add to your Claude settings:

<details>
<summary>Using uvx</summary>

```json
"mcpServers": {
  "time": {
    "command": "uvx",
    "args": ["mcp-server-time"]
  }
}
```
</details>

<details>
<summary>Using docker</summary>

```json
"mcpServers": {
  "time": {
    "command": "docker",
    "args": ["run", "-i", "--rm", "mcp/time"]
  }
}
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"mcpServers": {
  "time": {
    "command": "python",
    "args": ["-m", "mcp_server_time"]
  }
}
```
</details>

### Configure for Zed

Add to your Zed settings.json:

<details>
<summary>Using uvx</summary>

```json
"context_servers": {
  "mcp-server-time": {
    "command": "uvx",
    "args": ["mcp-server-time"]
  }
},
```
</details>

<details>
<summary>Using pip installation</summary>

```json
"context_servers": {
  "mcp-server-time": {
    "command": "python",
    "args": ["-m", "mcp_server_time"]
  }
},
```
</details>

### Customization - System Timezone

By default, the server automatically detects your system's timezone. You can override this by adding the argument `--local-timezone` to the `args` list in the configuration.

Example:

```json
{
  "command": "python",
  "args": ["-m", "mcp_server_time", "--local-timezone=America/New_York"]
}
```

## Example Interactions

1. Get current time:

```json
{
  "name": "get_current_time",
  "arguments": {
    "timezone": "Europe/Warsaw"
  }
}
```

Response:

```json
{
  "timezone": "Europe/Warsaw",
  "datetime": "2024-01-01T13:00:00+01:00",
  "is_dst": false
}
```

2. Convert time between timezones:

```json
{
  "name": "convert_time",
  "arguments": {
    "source_timezone": "America/New_York",
    "time": "16:30",
    "target_timezone": "Asia/Tokyo"
  }
}
```

Response:

```json
{
  "source": {
    "timezone": "America/New_York",
    "datetime": "2024-01-01T16:30:00-05:00",
    "is_dst": false
  },
  "target": {
    "timezone": "Asia/Tokyo",
    "datetime": "2024-01-02T06:30:00+09:00",
    "is_dst": false
  },
  "time_difference": "+14.0h"
}
```

## Debugging

You can use the MCP inspector to debug the server. For uvx installations:

```bash
npx @modelcontextprotocol/inspector uvx mcp-server-time
```

Or if you've installed the package in a specific directory or are developing on it:

```bash
cd path/to/servers/src/time
npx @modelcontextprotocol/inspector uv run mcp-server-time
```

## Examples of Questions for Claude

1. "What time is it now?" (will use system timezone)
2. "What time is it in Tokyo?"
3.
"When it's 4 PM in New York, what time is it in London?" 4. "Convert 9:30 AM Tokyo time to New York time" ## Build Docker build: ```bash cd src/time docker build -t mcp/time . ``` ## Contributing We encourage contributions to help expand and improve mcp-server-time. Whether you want to add new time-related tools, enhance existing functionality, or improve documentation, your input is valuable. For examples of other MCP servers and implementation patterns, see: https://github.com/modelcontextprotocol/servers Pull requests are welcome! Feel free to contribute new ideas, bug fixes, or enhancements to make mcp-server-time even more powerful and useful. ## License mcp-server-time is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License. For more details, please see the LICENSE file in the project repository.
{ "source": "modelcontextprotocol/servers", "title": "src/time/README.md", "url": "https://github.com/modelcontextprotocol/servers/blob/main/src/time/README.md", "date": "2024-11-19T01:10:17", "stars": 10638, "description": "Model Context Protocol Servers", "file_size": 4640 }
# Contributing to Void ### Welcome! 👋 This is the official guide on how to contribute to Void. We want to make it as easy as possible to contribute, so if you have any questions or comments, reach out via email or discord! There are a few ways to contribute: - 💫 Complete items on the [Roadmap](https://github.com/orgs/voideditor/projects/2). - 💡 Make suggestions in our [Discord](https://discord.gg/RSNjgaugJs). - 🪴 Start new Issues - see [Issues](https://github.com/voideditor/void/issues). ### Codebase Guide We highly recommend reading [this](https://github.com/microsoft/vscode/wiki/Source-Code-Organization) article on VSCode's sourcecode organization. <!-- ADD BLOG HERE We wrote a [guide to working in VSCode]. --> Most of Void's code lives in the folder `src/vs/workbench/contrib/void/`. ## Building Void ### a. Build Prerequisites - Mac If you're using a Mac, you need Python and XCode. You probably have these by default. ### b. Build Prerequisites - Windows If you're using a Windows computer, first get [Visual Studio 2022](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community) (recommended) or [VS Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) (not recommended). If you already have both, you might need to run the next few steps on both of them. Go to the "Workloads" tab and select: - `Desktop development with C++` - `Node.js build tools` Go to the "Individual Components" tab and select: - `MSVC v143 - VS 2022 C++ x64/x86 Spectre-mitigated libs (Latest)` - `C++ ATL for latest build tools with Spectre Mitigations` - `C++ MFC for latest build tools with Spectre Mitigations` Finally, click Install. ### c. Build Prerequisites - Linux First, run `npm install -g node-gyp`. Then: - Debian (Ubuntu, etc): `sudo apt-get install build-essential g++ libx11-dev libxkbfile-dev libsecret-1-dev libkrb5-dev python-is-python3`. - Red Hat (Fedora, etc): `sudo dnf install @development-tools gcc gcc-c++ make libsecret-devel krb5-devel libX11-devel libxkbfile-devel`. - Others: see [How to Contribute](https://github.com/microsoft/vscode/wiki/How-to-Contribute). ### d. Building Void To build Void, open `void/` inside VSCode. Then open your terminal and run: 1. `npm install` to install all dependencies. 2. `npm run watchreact` to build Void's browser dependencies like React. (If this doesn't work, try `npm run buildreact`). 3. Build Void. - Press <kbd>Cmd+Shift+B</kbd> (Mac). - Press <kbd>Ctrl+Shift+B</kbd> (Windows/Linux). - This step can take ~5 min. The build is done when you see two check marks. 4. Run Void. - Run `./scripts/code.sh` (Mac/Linux). - Run `./scripts/code.bat` (Windows). 6. Nice-to-knows. - You can always press <kbd>Ctrl+R</kbd> (<kbd>Cmd+R</kbd>) inside the new window to reload and see your new changes. It's faster than <kbd>Ctrl+Shift+P</kbd> and `Reload Window`. - You might want to add the flags `--user-data-dir ./.tmp/user-data --extensions-dir ./.tmp/extensions` to the above run command, which lets you delete the `.tmp` folder to reset any IDE changes you made when testing. #### Building Void from Terminal Alternatively, if you want to build Void from the terminal, instead of pressing <kbd>Cmd+Shift+B</kbd> you can run `npm run watch`. The build is done when you see something like this: ``` [watch-extensions] [00:37:39] Finished compilation extensions with 0 errors after 19303 ms [watch-client ] [00:38:06] Finished compilation with 0 errors after 46248 ms [watch-client ] [00:38:07] Starting compilation... 
[watch-client ] [00:38:07] Finished compilation with 0 errors after 5 ms ``` #### Common Fixes - Make sure you followed the prerequisite steps. - Make sure you have Node version `20.16.0` (the version in `.nvmrc`)! - If you get `"TypeError: Failed to fetch dynamically imported module"`, make sure all imports end with `.js`. - If you see missing styles, wait a few seconds and then reload. - If you have any questions, feel free to [submit an issue](https://github.com/voideditor/void/issues/new). You can also refer to VSCode's complete [How to Contribute](https://github.com/microsoft/vscode/wiki/How-to-Contribute) page. ## Packaging We don't usually recommend packaging. Instead, you should probably just build. If you're sure you want to package Void into an executable app, make sure you've built first, then run one of the following commands. This will create a folder named `VSCode-darwin-arm64` or similar outside of the void/ repo (see below). Be patient - packaging can take ~25 minutes. ### Mac - `npm run gulp vscode-darwin-arm64` - most common (Apple Silicon) - `npm run gulp vscode-darwin-x64` (Intel) ### Windows - `npm run gulp vscode-win32-x64` - most common - `npm run gulp vscode-win32-ia32` ### Linux - `npm run gulp vscode-linux-x64` - most common - `npm run gulp vscode-linux-arm` - `npm run gulp vscode-linux-ia32` ### Output This will generate a folder outside of `void/`: ```bash workspace/ ├── void/ # Your Void fork └── VSCode-darwin-arm64/ # Generated output ``` ### Distributing Void's maintainers distribute Void on our website and in releases. If you'd like to see the scripts to convert `Mac .app -> .dmg`, `Windows folder -> .exe`, and `Linux folder -> appimage` for distribution, feel free to reach out. ## Pull Request Guidelines - Please submit a pull request once you've made a change. - No need to submit an Issue unless you're creating a new feature that might involve multiple PRs. - Please don't use AI to write your PR 🙂 <!-- # Relevant files We keep track of all the files we've changed with Void so it's easy to rebase: Edit: far too many changes to track... this is old - README.md - CONTRIBUTING.md - VOID_USEFUL_LINKS.md - product.json - package.json - src/vs/workbench/api/common/{extHost.api.impl.ts | extHostApiCommands.ts} - src/vs/workbench/workbench.common.main.ts - src/vs/workbench/contrib/void/\* - extensions/void/\* - .github/\* - .vscode/settings/\* - .eslintrc.json - build/hygiene.js - build/lib/i18n.resources.json - build/npm/dirs.js - vscode.proposed.editorInsets.d.ts - not modified, but code copied -->
{ "source": "voideditor/void", "title": "CONTRIBUTING.md", "url": "https://github.com/voideditor/void/blob/main/CONTRIBUTING.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 6209 }
# Welcome to Void. <div align="center"> <img src="./src/vs/workbench/browser/parts/editor/media/slice_of_void.png" alt="Void Welcome" width="300" height="300" /> </div> Void is the open-source Cursor alternative. This repo contains the full sourcecode for Void. We are currently in [open beta](https://voideditor.com/email) for Discord members (see the `announcements` channel), with a waitlist for our official release. If you're new, welcome! - 👋 [Discord](https://discord.gg/RSNjgaugJs) - 🔨 [Contribute](https://github.com/voideditor/void/blob/main/CONTRIBUTING.md) - 🚙 [Roadmap](https://github.com/orgs/voideditor/projects/2) - 📝 [Changelog](https://voideditor.com/changelog) ## Contributing 1. Feel free to attend a weekly meeting in our Discord channel if you'd like to contribute! 2. To get started working on Void, see [Contributing](https://github.com/voideditor/void/blob/main/CONTRIBUTING.md). 3. We're open to collaborations and suggestions of all types - just reach out. ## Reference Void is a fork of the [vscode](https://github.com/microsoft/vscode) repository. For some useful links on VSCode, see [`VOID_USEFUL_LINKS.md`](https://github.com/voideditor/void/blob/main/VOID_USEFUL_LINKS.md). ## Support Feel free to reach out in our Discord or contact us via email: [email protected].
{ "source": "voideditor/void", "title": "README.md", "url": "https://github.com/voideditor/void/blob/main/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1331 }
# Useful links The Void team put together this list of links to get up and running with VSCode's sourcecode. We hope it's helpful! ## Contributing - [How VSCode's sourcecode is organized](https://github.com/microsoft/vscode/wiki/Source-Code-Organization) - this explains where the entry point files are, what `browser/` and `common/` mean, etc. This is the most important read on this whole list! We recommend reading the whole thing. - [Built-in VSCode styles](https://code.visualstudio.com/api/references/theme-color) - CSS variables that are built into VSCode. Use `var(--vscode-{theme but replacing . with -})`. You can also see their [Webview theming guide](https://code.visualstudio.com/api/extension-guides/webview#theming-webview-content). ## Beginners / Getting started - [VSCode UI guide](https://code.visualstudio.com/docs/getstarted/userinterface) - covers auxbar, panels, etc. - [UX guide](https://code.visualstudio.com/api/ux-guidelines/overview) - covers Containers, Views, Items, etc. ## Misc - [Every command](https://code.visualstudio.com/api/references/commands) built-in to VSCode - not used often, but here for reference. ## VSCode's Extension API Void is no longer an extension, so these links are no longer required, but they might be useful if we ever build an extension again. - [Files you need in an extension](https://code.visualstudio.com/api/get-started/extension-anatomy). - [An extension's `package.json` schema](https://code.visualstudio.com/api/references/extension-manifest). - ["Contributes" Guide](https://code.visualstudio.com/api/references/contribution-points) - the `"contributes"` part of `package.json` is how an extension mounts. - [The Full VSCode Extension API](https://code.visualstudio.com/api/references/vscode-api) - look on the right side for organization. The [bottom](https://code.visualstudio.com/api/references/vscode-api#api-patterns) of the page is easy to miss but is useful - cancellation tokens, events, disposables. - [Activation events](https://code.visualstudio.com/api/references/activation-events) you can define in `package.json` (not the most useful).
{ "source": "voideditor/void", "title": "VOID_USEFUL_LINKS.md", "url": "https://github.com/voideditor/void/blob/main/VOID_USEFUL_LINKS.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 2136 }
# Code - OSS Development Container [![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/microsoft/vscode) This repository includes configuration for a development container for working with Code - OSS in a local container or using [GitHub Codespaces](https://github.com/features/codespaces). > **Tip:** The default VNC password is `vscode`. The VNC server runs on port `5901` and a web client is available on port `6080`. ## Quick start - local If you already have VS Code and Docker installed, you can click the badge above or [here](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/microsoft/vscode) to get started. Clicking these links will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use. 1. Install Docker Desktop or Docker for Linux on your local machine. (See [docs](https://aka.ms/vscode-remote/containers/getting-started) for additional details.) 2. **Important**: Docker needs at least **4 Cores and 8 GB of RAM** to run a full build with **9 GB of RAM** being recommended. If you are on macOS, or are using the old Hyper-V engine for Windows, update these values for Docker Desktop by right-clicking on the Docker status bar item and going to **Preferences/Settings > Resources > Advanced**. > **Note:** The [Resource Monitor](https://marketplace.visualstudio.com/items?itemName=mutantdino.resourcemonitor) extension is included in the container so you can keep an eye on CPU/Memory in the status bar. 3. Install [Visual Studio Code Stable](https://code.visualstudio.com/) or [Insiders](https://code.visualstudio.com/insiders/) and the [Dev Containers](https://aka.ms/vscode-remote/download/containers) extension. ![Image of Dev Containers extension](https://microsoft.github.io/vscode-remote-release/images/dev-containers-extn.png) > **Note:** The Dev Containers extension requires the Visual Studio Code distribution of Code - OSS. See the [FAQ](https://aka.ms/vscode-remote/faq/license) for details. 4. Press <kbd>Ctrl/Cmd</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd> or <kbd>F1</kbd> and select **Dev Containers: Clone Repository in Container Volume...**. > **Tip:** While you can use your local source tree instead, operations like `npm i` can be slow on macOS or when using the Hyper-V engine on Windows. We recommend using the WSL filesystem on Windows or the "clone repository in container" approach on Windows and macOS instead since it uses "named volume" rather than the local filesystem. 5. Type `https://github.com/microsoft/vscode` (or a branch or PR URL) in the input box and press <kbd>Enter</kbd>. 6. After the container is running: 1. If you have the `DISPLAY` or `WAYLAND_DISPLAY` environment variables set locally (or in WSL on Windows), desktop apps in the container will be shown in local windows. 2. If these are not set, open a web browser and go to [http://localhost:6080](http://localhost:6080), or use a [VNC Viewer][def] to connect to `localhost:5901` and enter `vscode` as the password. Anything you start in VS Code, or the integrated terminal, will appear here. Next: **[Try it out!](#try-it)** ## Quick start - GitHub Codespaces 1. 
From the [microsoft/vscode GitHub repository](https://github.com/microsoft/vscode), click on the **Code** dropdown, select **Open with Codespaces**, and then click on **New codespace**. If prompted, select the **Standard** machine size (which is also the default). > **Note:** You will not see these options within GitHub if you are not in the Codespaces beta. 2. After the codespace is up and running in your browser, press <kbd>Ctrl/Cmd</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd> or <kbd>F1</kbd> and select **Ports: Focus on Ports View**. 3. You should see **VNC web client (6080)** under in the list of ports. Select the line and click on the globe icon to open it in a browser tab. > **Tip:** If you do not see the port, <kbd>Ctrl/Cmd</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd> or <kbd>F1</kbd>, select **Forward a Port** and enter port `6080`. 4. In the new tab, you should see noVNC. Click **Connect** and enter `vscode` as the password. Anything you start in VS Code, or the integrated terminal, will appear here. Next: **[Try it out!](#try-it)** ### Using VS Code with GitHub Codespaces You may see improved VNC responsiveness when accessing a codespace from VS Code client since you can use a [VNC Viewer][def]. Here's how to do it. 1. Install [Visual Studio Code Stable](https://code.visualstudio.com/) or [Insiders](https://code.visualstudio.com/insiders/) and the [GitHub Codespaces extension](https://marketplace.visualstudio.com/items?itemName=GitHub.codespaces). > **Note:** The GitHub Codespaces extension requires the Visual Studio Code distribution of Code - OSS. 2. After the VS Code is up and running, press <kbd>Ctrl/Cmd</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd> or <kbd>F1</kbd>, choose **Codespaces: Create New Codespace**, and use the following settings: - `microsoft/vscode` for the repository. - Select any branch (e.g. **main**) - you can select a different one later. - Choose **Standard** (4-core, 8GB) as the size. 3. After you have connected to the codespace, you can use a [VNC Viewer][def] to connect to `localhost:5901` and enter `vscode` as the password. > **Tip:** You may also need change your VNC client's **Picture Quality** setting to **High** to get a full color desktop. 4. Anything you start in VS Code, or the integrated terminal, will appear here. Next: **[Try it out!](#try-it)** ## Try it This container uses the [Fluxbox](http://fluxbox.org/) window manager to keep things lean. **Right-click on the desktop** to see menu options. It works with GNOME and GTK applications, so other tools can be installed if needed. > **Note:** You can also set the resolution from the command line by typing `set-resolution`. To start working with Code - OSS, follow these steps: 1. In your local VS Code client, open a terminal (<kbd>Ctrl/Cmd</kbd> + <kbd>Shift</kbd> + <kbd>\`</kbd>) and type the following commands: ```bash npm i bash scripts/code.sh ``` 2. After the build is complete, open a web browser or a [VNC Viewer][def] to connect to the desktop environment as described in the quick start and enter `vscode` as the password. 3. You should now see Code - OSS! Next, let's try debugging. 1. Shut down Code - OSS by clicking the box in the upper right corner of the Code - OSS window through your browser or VNC viewer. 2. Go to your local VS Code client, and use the **Run / Debug** view to launch the **VS Code** configuration. (Typically the default, so you can likely just press <kbd>F5</kbd>). 
> **Note:** If launching times out, you can increase the value of `timeout` in the "VS Code", "Attach Main Process", "Attach Extension Host", and "Attach to Shared Process" configurations in [launch.json](../../.vscode/launch.json). However, running `./scripts/code.sh` first will set up Electron which will usually solve timeout issues. 3. After a bit, Code - OSS will appear with the debugger attached! Enjoy! ### Notes The container comes with VS Code Insiders installed. To run it from an Integrated Terminal use `VSCODE_IPC_HOOK_CLI= /usr/bin/code-insiders .`. [def]: https://www.realvnc.com/en/connect/download/viewer/
{ "source": "voideditor/void", "title": ".devcontainer/README.md", "url": "https://github.com/voideditor/void/blob/main/.devcontainer/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 7602 }
# Setup 0. Clone, and then run `git submodule update --init --recursive` 1. Get the extensions: [rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer) and [CodeLLDB](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb) 2. Ensure your workspace is set to the `launcher` folder being the root. ## Building the CLI on Windows For the moment, we require OpenSSL on Windows, where it is not usually installed by default. To install it: 1. Install (clone) vcpkg [using their instructions](https://github.com/Microsoft/vcpkg#quick-start-windows) 1. Add the location of the `vcpkg` directory to your system or user PATH. 1. Run`vcpkg install openssl:x64-windows-static-md` (after restarting your terminal for PATH changes to apply) 1. You should be able to then `cargo build` successfully OpenSSL is needed for the key exchange we do when forwarding Basis tunnels. When all interested Basis clients support ED25519, we would be able to solely use libsodium. At the time of writing however, there is [no active development](https://chromestatus.com/feature/4913922408710144) on this in Chromium. # Debug 1. You can use the Debug tasks already configured to run the launcher.
{ "source": "voideditor/void", "title": "cli/CONTRIBUTING.md", "url": "https://github.com/voideditor/void/blob/main/cli/CONTRIBUTING.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1230 }
# VSCode Tests ## Contents This folder contains the various test runners for VSCode. Please refer to the documentation within for how to run them: * `unit`: our suite of unit tests ([README](unit/README.md)) * `integration`: our suite of API tests ([README](integration/browser/README.md)) * `smoke`: our suite of automated UI tests ([README](smoke/README.md))
{ "source": "voideditor/void", "title": "test/README.md", "url": "https://github.com/voideditor/void/blob/main/test/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 363 }
# monaco-editor-core > This npm module is a building block for the [monaco-editor](https://www.npmjs.com/package/monaco-editor) npm module and unless you are doing something special (e.g. authoring a monaco editor language that can be shipped and consumed independently), it is best to consume the [monaco-editor](https://www.npmjs.com/package/monaco-editor) module that contains this module and adds languages supports. The Monaco Editor is the code editor that powers [VS Code](https://github.com/microsoft/vscode). Here is a good page describing some [editor features](https://code.visualstudio.com/docs/editor/editingevolved). This npm module contains the core editor functionality, as it comes from the [vscode repository](https://github.com/microsoft/vscode). ## License [MIT](https://github.com/microsoft/vscode/blob/main/LICENSE.txt)
{ "source": "voideditor/void", "title": "build/monaco/README-npm.md", "url": "https://github.com/voideditor/void/blob/main/build/monaco/README-npm.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 846 }
# Steps to publish a new version of monaco-editor-core ## Generate monaco.d.ts * The `monaco.d.ts` is now automatically generated when running `gulp watch` ## Bump version * increase version in `build/monaco/package.json` ## Generate npm contents for monaco-editor-core * Be sure to have all changes committed **and pushed to the remote** * (the generated files contain the HEAD sha and that should be available on the remote) * run gulp editor-distro ## Publish * `cd out-monaco-editor-core` * `npm publish`
{ "source": "voideditor/void", "title": "build/monaco/README.md", "url": "https://github.com/voideditor/void/blob/main/build/monaco/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 516 }
## Setup - Clone [microsoft/vscode](https://github.com/microsoft/vscode) - Run `npm i` at `/`, this will install - Dependencies for `/extension/css-language-features/` - Dependencies for `/extension/css-language-features/server/` - devDependencies such as `gulp` - Open `/extensions/css-language-features/` as the workspace in VS Code - In `/extensions/css-language-features/` run `npm run compile`(or `npm run watch`) to build the client and server - Run the [`Launch Extension`](https://github.com/microsoft/vscode/blob/master/extensions/css-language-features/.vscode/launch.json) debug target in the Debug View. This will: - Launch a new VS Code instance with the `css-language-features` extension loaded - Open a `.css` file to activate the extension. The extension will start the CSS language server process. - Add `"css.trace.server": "verbose"` to the settings to observe the communication between client and server in the `CSS Language Server` output. - Debug the extension and the language server client by setting breakpoints in`css-language-features/client/` - Debug the language server process by using `Attach to Node Process` command in the VS Code window opened on `css-language-features`. - Pick the process that contains `cssServerMain` in the command line. Hover over `code-insiders` resp `code` processes to see the full process command line. - Set breakpoints in `css-language-features/server/` - Run `Reload Window` command in the launched instance to reload the extension ## Contribute to vscode-css-languageservice [microsoft/vscode-css-languageservice](https://github.com/microsoft/vscode-css-languageservice) contains the language smarts for CSS/SCSS/Less. This extension wraps the css language service into a Language Server for VS Code. If you want to fix CSS/SCSS/Less issues or make improvements, you should make changes at [microsoft/vscode-css-languageservice](https://github.com/microsoft/vscode-css-languageservice). However, within this extension, you can run a development version of `vscode-css-languageservice` to debug code or test language features interactively: #### Linking `vscode-css-languageservice` in `css-language-features/server/` - Clone [microsoft/vscode-css-languageservice](https://github.com/microsoft/vscode-css-languageservice) - Run `npm i` in `vscode-css-languageservice` - Run `npm link` in `vscode-css-languageservice`. This will compile and link `vscode-css-languageservice` - In `css-language-features/server/`, run `npm link vscode-css-languageservice` #### Testing the development version of `vscode-css-languageservice` - Open both `vscode-css-languageservice` and this extension in a single workspace with [multi-root workspace](https://code.visualstudio.com/docs/editor/multi-root-workspaces) feature - Run `npm run watch` in `vscode-css-languageservice` to recompile the extension whenever it changes - Run `npm run watch` at `css-language-features/server/` to recompile this extension with the linked version of `vscode-css-languageservice` - Make some changes in `vscode-css-languageservice` - Now when you run `Launch Extension` debug target, the launched instance will use your development version of `vscode-css-languageservice`. You can interactively test the language features.
{ "source": "voideditor/void", "title": "extensions/css-language-features/CONTRIBUTING.md", "url": "https://github.com/voideditor/void/blob/main/extensions/css-language-features/CONTRIBUTING.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 3270 }
# Language Features for CSS, SCSS, and LESS files **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features See [CSS, SCSS and Less in VS Code](https://code.visualstudio.com/docs/languages/css) to learn about the features of this extension. Please read the [CONTRIBUTING.md](https://github.com/microsoft/vscode/blob/master/extensions/css-language-features/CONTRIBUTING.md) file to learn how to contribute to this extension.
{ "source": "voideditor/void", "title": "extensions/css-language-features/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/css-language-features/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 484 }
## How to build and run from source? Read the basics about extension authoring from [Extending Visual Studio Code](https://code.visualstudio.com/docs/extensions/overview) - Read [Build and Run VS Code from Source](https://github.com/microsoft/vscode/wiki/How-to-Contribute#build-and-run-from-source) to get a local dev set up running for VS Code - Open the `extensions/emmet` folder in the vscode repo in VS Code - Press F5 to start debugging ## Running tests Tests for Emmet extension are run as integration tests as part of VS Code. - Read [Build and Run VS Code from Source](https://github.com/microsoft/vscode/wiki/How-to-Contribute#build-and-run-from-source) to get a local dev set up running for VS Code - Run `./scripts/test-integration.sh` to run all the integrations tests that include the Emmet tests.
{ "source": "voideditor/void", "title": "extensions/emmet/CONTRIBUTING.md", "url": "https://github.com/voideditor/void/blob/main/extensions/emmet/CONTRIBUTING.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 816 }
# Emmet integration in Visual Studio Code **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features See [Emmet in Visual Studio Code](https://code.visualstudio.com/docs/editor/emmet) to learn about the features of this extension. Please read the [CONTRIBUTING.md](https://github.com/microsoft/vscode/blob/master/extensions/emmet/CONTRIBUTING.md) file to learn how to contribute to this extension.
{ "source": "voideditor/void", "title": "extensions/emmet/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/emmet/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 457 }
# Git static contributions and remote repository picker **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features Git static contributions and remote repository picker. ## API The Git extension exposes an API, reachable by any other extension. 1. Copy `src/api/git-base.d.ts` to your extension's sources; 2. Include `git-base.d.ts` in your extension's compilation. 3. Get a hold of the API with the following snippet: ```ts const gitBaseExtension = vscode.extensions.getExtension<GitBaseExtension>('vscode.git-base').exports; const git = gitBaseExtension.getAPI(1); ```
{ "source": "voideditor/void", "title": "extensions/git-base/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/git-base/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 638 }
# Git integration for Visual Studio Code **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features See [Git support in VS Code](https://code.visualstudio.com/docs/editor/versioncontrol#_git-support) to learn about the features of this extension. ## API The Git extension exposes an API, reachable by any other extension. 1. Copy `src/api/git.d.ts` to your extension's sources; 2. Include `git.d.ts` in your extension's compilation. 3. Get a hold of the API with the following snippet: ```ts const gitExtension = vscode.extensions.getExtension<GitExtension>('vscode.git').exports; const git = gitExtension.getAPI(1); ``` **Note:** To ensure that the `vscode.git` extension is activated before your extension, add `extensionDependencies` ([docs](https://code.visualstudio.com/api/references/extension-manifest)) into the `package.json` of your extension: ```json "extensionDependencies": [ "vscode.git" ] ```
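As a follow-up sketch, once the API handle is obtained you can inspect the open repositories, for example logging each repository's root and current branch. This assumes the `repositories`, `rootUri`, and `state.HEAD` fields exposed by the copied `git.d.ts`:

```ts
import * as vscode from 'vscode';
import type { GitExtension } from './api/git'; // the copied git.d.ts

export function logCurrentBranches(): void {
	const gitExtension = vscode.extensions.getExtension<GitExtension>('vscode.git')?.exports;
	const git = gitExtension?.getAPI(1);
	if (!git) {
		return;
	}
	for (const repository of git.repositories) {
		// HEAD can be undefined while the repository state is still being resolved.
		const branch = repository.state.HEAD?.name ?? '(detached or unknown)';
		console.log(`${repository.rootUri.toString()}: ${branch}`);
	}
}
```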
{ "source": "voideditor/void", "title": "extensions/git/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/git/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 981 }
# GitHub Authentication for Visual Studio Code **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features This extension provides support for authenticating to GitHub. It registers the `github` Authentication Provider that can be leveraged by other extensions. This also provides the GitHub authentication used by Settings Sync.
{ "source": "voideditor/void", "title": "extensions/github-authentication/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/github-authentication/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 387 }
# GitHub for Visual Studio Code **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features This extension provides the following GitHub-related features for VS Code: - `Publish to GitHub` command - `Clone from GitHub` participant to the `Git: Clone` command - GitHub authentication for built-in git commands, controlled via the `github.gitAuthentication` command - Automatic fork creation when attempting to push to a repository without permissions
{ "source": "voideditor/void", "title": "extensions/github/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/github/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 508 }
# Grunt - The JavaScript Task Runner **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features This extension supports running [Grunt](https://gruntjs.com/) tasks defined in a `gruntfile.js` file as [VS Code tasks](https://code.visualstudio.com/docs/editor/tasks). Grunt tasks with the name 'build', 'compile', or 'watch' are treated as build tasks. To run Grunt tasks, use the **Tasks** menu. ## Settings - `grunt.autoDetect` - Enable detecting tasks from `gruntfile.js` files, the default is `on`.
{ "source": "voideditor/void", "title": "extensions/grunt/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/grunt/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 562 }
# Gulp - Automate and enhance your workflow **Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled. ## Features This extension supports running [Gulp](https://gulpjs.com/) tasks defined in a `gulpfile.{js,ts}` file as [VS Code tasks](https://code.visualstudio.com/docs/editor/tasks). Gulp tasks with the name 'build', 'compile', or 'watch' are treated as build tasks. To run Gulp tasks, use the **Tasks** menu. ## Settings - `gulp.autoDetect` - Enable detecting tasks from `gulpfile.{js,ts}` files, the default is `on`.
{ "source": "voideditor/void", "title": "extensions/gulp/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/gulp/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 572 }
## Setup - Clone [microsoft/vscode](https://github.com/microsoft/vscode) - Run `npm i` at `/`, this will install - Dependencies for `/extension/html-language-features/` - Dependencies for `/extension/html-language-features/server/` - devDependencies such as `gulp` - Open `/extensions/html-language-features/` as the workspace in VS Code - In `/extensions/html-language-features/` run `npm run compile`(or `npm run watch`) to build the client and server - Run the [`Launch Extension`](https://github.com/microsoft/vscode/blob/master/extensions/html-language-features/.vscode/launch.json) debug target in the Debug View. This will: - Launch a new VS Code instance with the `html-language-features` extension loaded - Open a `.html` file to activate the extension. The extension will start the HTML language server process. - Add `"html.trace.server": "verbose"` to the settings to observe the communication between client and server in the `HTML Language Server` output. - Debug the extension and the language server client by setting breakpoints in`html-language-features/client/` - Debug the language server process by using `Attach to Node Process` command in the VS Code window opened on `html-language-features`. - Pick the process that contains `htmlServerMain` in the command line. Hover over `code-insiders` resp `code` processes to see the full process command line. - Set breakpoints in `html-language-features/server/` - Run `Reload Window` command in the launched instance to reload the extension ### Contribute to vscode-html-languageservice [microsoft/vscode-html-languageservice](https://github.com/microsoft/vscode-html-languageservice) contains the language smarts for html. This extension wraps the html language service into a Language Server for VS Code. If you want to fix html issues or make improvements, you should make changes at [microsoft/vscode-html-languageservice](https://github.com/microsoft/vscode-html-languageservice). However, within this extension, you can run a development version of `vscode-html-languageservice` to debug code or test language features interactively: #### Linking `vscode-html-languageservice` in `html-language-features/server/` - Clone [microsoft/vscode-html-languageservice](https://github.com/microsoft/vscode-html-languageservice) - Run `npm i` in `vscode-html-languageservice` - Run `npm link` in `vscode-html-languageservice`. This will compile and link `vscode-html-languageservice` - In `html-language-features/server/`, run `npm link vscode-html-languageservice` #### Testing the development version of `vscode-html-languageservice` - Open both `vscode-html-languageservice` and this extension in two windows or with a single window with the[multi-root workspace](https://code.visualstudio.com/docs/editor/multi-root-workspaces) feature - Run `npm run watch` at `html-languagefeatures/server/` to recompile this extension with the linked version of `vscode-html-languageservice` - Make some changes in `vscode-html-languageservice` - Now when you run `Launch Extension` debug target, the launched instance will use your development version of `vscode-html-languageservice`. You can interactively test the language features.
{ "source": "voideditor/void", "title": "extensions/html-language-features/CONTRIBUTING.md", "url": "https://github.com/voideditor/void/blob/main/extensions/html-language-features/CONTRIBUTING.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 3206 }
# Language Features for HTML

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

See [HTML in Visual Studio Code](https://code.visualstudio.com/docs/languages/html) to learn about the features of this extension.

Please read the [CONTRIBUTING.md](https://github.com/microsoft/vscode/blob/master/extensions/html-language-features/CONTRIBUTING.md) file to learn how to contribute to this extension.
{ "source": "voideditor/void", "title": "extensions/html-language-features/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/html-language-features/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 462 }
# Jupyter for Visual Studio Code

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

This extension provides the following Jupyter-related features for VS Code:

- Open, edit and save .ipynb files
{ "source": "voideditor/void", "title": "extensions/ipynb/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/ipynb/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 262 }
# Jake - JavaScript build tool

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

This extension supports running [Jake](http://jakejs.com/) tasks defined in a `Jakefile.js` file as [VS Code tasks](https://code.visualstudio.com/docs/editor/tasks). Jake tasks with the name 'build', 'compile', or 'watch' are treated as build tasks.

To run Jake tasks, use the **Tasks** menu.

## Settings

- `jake.autoDetect` - Enable detecting tasks from `Jakefile.js` files, the default is `on`.
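As a minimal `settings.json` sketch, assuming the setting accepts the same `on`/`off` values as other task auto-detection settings, turning detection off would look like this:

```json
{
  "jake.autoDetect": "off"
}
```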
{ "source": "voideditor/void", "title": "extensions/jake/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/jake/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 548 }
## Setup

- Clone [microsoft/vscode](https://github.com/microsoft/vscode)
- Run `npm i` at `/`, this will install
  - Dependencies for `/extensions/json-language-features/`
  - Dependencies for `/extensions/json-language-features/server/`
  - devDependencies such as `gulp`
- Open `/extensions/json-language-features/` as the workspace in VS Code
- In `/extensions/json-language-features/` run `npm run compile` (or `npm run watch`) to build the client and server
- Run the [`Launch Extension`](https://github.com/microsoft/vscode/blob/master/extensions/json-language-features/.vscode/launch.json) debug target in the Debug View. This will:
  - Launch a new VS Code instance with the `json-language-features` extension loaded
- Open a `.json` file to activate the extension. The extension will start the JSON language server process.
- Add `"json.trace.server": "verbose"` to the settings to observe the communication between client and server in the `JSON Language Server` output.
- Debug the extension and the language server client by setting breakpoints in `json-language-features/client/`
- Debug the language server process by using the `Attach to Node Process` command in the VS Code window opened on `json-language-features`.
  - Pick the process that contains `jsonServerMain` in the command line. Hover over `code-insiders` resp `code` processes to see the full process command line.
  - Set breakpoints in `json-language-features/server/`
- Run the `Reload Window` command in the launched instance to reload the extension

### Contribute to vscode-json-languageservice

[microsoft/vscode-json-languageservice](https://github.com/microsoft/vscode-json-languageservice) is the library that implements the language smarts for JSON. The JSON language server forwards most of the requests to the service library. If you want to fix JSON issues or make improvements, you should make changes at [microsoft/vscode-json-languageservice](https://github.com/microsoft/vscode-json-languageservice).

However, within this extension, you can run a development version of `vscode-json-languageservice` to debug code or test language features interactively:

#### Linking `vscode-json-languageservice` in `json-language-features/server/`

- Clone [microsoft/vscode-json-languageservice](https://github.com/microsoft/vscode-json-languageservice)
- Run `npm i` in `vscode-json-languageservice`
- Run `npm link` in `vscode-json-languageservice`. This will compile and link `vscode-json-languageservice`
- In `json-language-features/server/`, run `npm link vscode-json-languageservice`

#### Testing the development version of `vscode-json-languageservice`

- Open both `vscode-json-languageservice` and this extension in two windows or with a single window with the [multi-root workspace](https://code.visualstudio.com/docs/editor/multi-root-workspaces) feature.
- Run `npm run watch` at `json-language-features/server/` to recompile this extension with the linked version of `vscode-json-languageservice`
- Make some changes in `vscode-json-languageservice`
- Now when you run the `Launch Extension` debug target, the launched instance will use your development version of `vscode-json-languageservice`. You can interactively test the language features.
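As with the HTML extension, the trace setting mentioned above is a single entry in the launched instance's `settings.json` (minimal sketch):

```json
{
  "json.trace.server": "verbose"
}
```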
{ "source": "voideditor/void", "title": "extensions/json-language-features/CONTRIBUTING.md", "url": "https://github.com/voideditor/void/blob/main/extensions/json-language-features/CONTRIBUTING.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 3226 }
# Language Features for JSON files

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

See [JSON in Visual Studio Code](https://code.visualstudio.com/docs/languages/json) to learn about the features of this extension.
{ "source": "voideditor/void", "title": "extensions/json-language-features/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/json-language-features/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 283 }
# Language Features for Markdown files

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

See [Markdown in Visual Studio Code](https://code.visualstudio.com/docs/languages/markdown) to learn about the features of this extension.
{ "source": "voideditor/void", "title": "extensions/markdown-language-features/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/markdown-language-features/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 295 }
# Markdown Math

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

Adds math rendering using [KaTeX](https://katex.org) to VS Code's built-in markdown preview and markdown cells in notebooks.
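As an illustrative sketch, assuming the default dollar-sign delimiters, a markdown file or notebook cell like the following would have its math rendered by KaTeX in the preview:

```markdown
Inline math such as $e^{i\pi} + 1 = 0$ renders in the preview, as do display blocks:

$$
\int_0^1 x^2 \, dx = \frac{1}{3}
$$
```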
{ "source": "voideditor/void", "title": "extensions/markdown-math/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/markdown-math/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 245 }
# Media Preview

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

This extension provides basic preview for images, audio and video files.

### Supported image file extensions

- `.jpg`, `.jpe`, `.jpeg`
- `.png`
- `.bmp`
- `.gif`
- `.ico`
- `.webp`
- `.avif`

### Supported audio formats

- `.mp3`
- `.wav`
- `.ogg`, `.oga`

### Supported video formats

- `.mp4` (does not support `aac` audio tracks)
- `.webm` (vp8 only)
{ "source": "voideditor/void", "title": "extensions/media-preview/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/media-preview/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 489 }
# Merge Conflict

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

See [Merge Conflicts in VS Code](https://code.visualstudio.com/docs/editor/versioncontrol#_merge-conflicts) to learn about the features of this extension.
{ "source": "voideditor/void", "title": "extensions/merge-conflict/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/merge-conflict/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 285 }
# Microsoft Authentication for Visual Studio Code

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

This extension provides support for authenticating to Microsoft. It registers the `microsoft` Authentication Provider that can be leveraged by other extensions. It also provides the Microsoft authentication used by Settings Sync.

Additionally, it provides the `microsoft-sovereign-cloud` Authentication Provider that can be used to sign in to other Azure clouds like Azure for US Government or Azure China. Use the setting `microsoft-sovereign-cloud.endpoint` to select the authentication endpoint the provider should use. Please note that different scopes may also be required in different environments.
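A minimal `settings.json` sketch for the sovereign-cloud setting mentioned above; the endpoint URL here is a hypothetical placeholder, not an official value:

```json
{
  "microsoft-sovereign-cloud.endpoint": "https://login.contoso-sovereign.example"
}
```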
{ "source": "voideditor/void", "title": "extensions/microsoft-authentication/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/microsoft-authentication/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 775 }
# Builtin Notebook Output Renderers for Visual Studio Code

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

This extension provides the following notebook renderers for VS Code:

- Image renderer for png, jpeg and gif
{ "source": "voideditor/void", "title": "extensions/notebook-renderers/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/notebook-renderers/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 286 }
# Node npm

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

### Task Running

This extension supports running npm scripts defined in the `package.json` as [tasks](https://code.visualstudio.com/docs/editor/tasks). Scripts with the name 'build', 'compile', or 'watch' are treated as build tasks.

To run scripts as tasks, use the **Tasks** menu.

For more information about auto detection of Tasks, see the [documentation](https://code.visualstudio.com/Docs/editor/tasks#_task-autodetection).

### Script Explorer

The Npm Script Explorer shows the npm scripts found in your workspace. The explorer view is enabled by the setting `npm.enableScriptExplorer`.

A script can be opened, run, or debugged from the explorer.

### Run Scripts from the Editor

The extension supports running the selected script as a task when editing the `package.json` file. You can either run a script from the hover shown on a script or use the command `Run Selected Npm Script`.

### Run Scripts from a Folder in the Explorer

The extension supports running a script as a task from a folder in the Explorer. The command `Run NPM Script in Folder...` shown in the Explorer context menu finds all scripts in `package.json` files that are contained in this folder. You can then select the script to be executed as a task from the resulting list. You enable this support with the `npm.enableRunFromFolder` setting, which is `false` by default.

### Others

The extension fetches data from <https://registry.npmjs.org> and <https://registry.bower.io> to provide auto-completion and information on hover features on npm dependencies.

## Settings

- `npm.autoDetect` - Enable detecting scripts as tasks, the default is `on`.
- `npm.runSilent` - Run npm scripts with the `--silent` option, the default is `false`.
- `npm.packageManager` - The package manager used to run the scripts: `auto`, `npm`, `yarn`, `pnpm` or `bun`. The default is `auto`, which detects your package manager based on files in your workspace.
- `npm.exclude` - Glob patterns for folders that should be excluded from automatic script detection. The pattern is matched against the **absolute path** of the package.json. For example, to exclude all test folders use `**/test/**`.
- `npm.enableScriptExplorer` - Enable an explorer view for npm scripts.
- `npm.scriptExplorerAction` - The default click action: `open` or `run`, the default is `open`.
- `npm.enableRunFromFolder` - Enable running npm scripts from the context menu of folders in Explorer, the default is `false`.
- `npm.scriptCodeLens.enable` - Enable/disable the code lenses to run a script, the default is `false`.
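Pulling a few of the settings above together, a workspace `settings.json` might look like this; the values shown are only illustrative examples, not recommendations:

```json
{
  "npm.autoDetect": "on",
  "npm.packageManager": "auto",
  "npm.exclude": "**/test/**",
  "npm.enableScriptExplorer": true,
  "npm.scriptExplorerAction": "run",
  "npm.enableRunFromFolder": true
}
```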
{ "source": "voideditor/void", "title": "extensions/npm/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/npm/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 2692 }
## 0.0.48

- Support `%n` in ProxyCommand
- fix: add missing direct @types/ssh2-stream dependency (#177)
- fix Win32 internal error (#178)

## 0.0.47

- Add support for loong64 (#175)
- Add s390x support (#174)
- Support vscodium alpine reh (#142)

## 0.0.46

- Add riscv64 support (#147)

## 0.0.45

- Use windows-x64 server on windows-arm64

## 0.0.44

- Update ssh2 lib
- Properly set extensionHost env variables

## 0.0.43

- Fix parsing multiple include directives

## 0.0.42

- Fix remote label to show port when connecting to a port other than 22

## 0.0.41

- Take into account parsed port from ssh destination. Fixes (#110)

## 0.0.40

- Update ssh-config package

## 0.0.39

- output error messages when downloading vscode server (#39)
- Add PreferredAuthentications support (#97)

## 0.0.38

- Enable remote support for ppc64le (#93)

## 0.0.37

- Default to Current OS User in Connection String if No User Provided (#91)
- Add support for (unofficial) DragonFly reh (#86)

## 0.0.36

- Make wget support continue download (#85)

## 0.0.35

- Fixes hardcoded agentsock for windows breaks pageant compatibility (#81)

## 0.0.34

- Add remote.SSH.connectTimeout setting
- adding %r username replacement to proxycommand (#77)

## 0.0.33

- feat: support %r user substitution in proxycommand

## 0.0.32

- feat: use serverDownloadUrlTemplate from product.json (#59)

## 0.0.31

- feat: support glob patterns in SSH include directives

## 0.0.30

- feat: support file patterns in SSH include directives
{ "source": "voideditor/void", "title": "extensions/open-remote-ssh/CHANGELOG.md", "url": "https://github.com/voideditor/void/blob/main/extensions/open-remote-ssh/CHANGELOG.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1499 }
# Open Remote - SSH

## SSH Host Requirements

You can connect to a running SSH server on the following platforms.

**Supported**:

- x86_64 Debian 8+, Ubuntu 16.04+, CentOS / RHEL 7+ Linux.
- ARMv7l (AArch32) Raspbian Stretch/9+ (32-bit).
- ARMv8l (AArch64) Ubuntu 18.04+ (64-bit).
- macOS 10.14+ (Mojave)
- Windows 10+
- FreeBSD 13 (Requires manual remote-extension-host installation)
- DragonFlyBSD (Requires manual remote-extension-host installation)

## Requirements

**Activation**

Enable the extension in your `argv.json`

```json
{
    ...
    "enable-proposed-api": [
        ...,
        "jeanp413.open-remote-ssh",
    ]
    ...
}
```

which you can open by running the `Preferences: Configure Runtime Arguments` command. The file is located in `~/.vscode-oss/argv.json`.

**Alpine linux**

When running on alpine linux, the packages `libstdc++` and `bash` are necessary and can be installed via running

```bash
sudo apk add bash libstdc++
```

## SSH configuration file

[OpenSSH](https://www.openssh.com/) supports using a [configuration file](https://linuxize.com/post/using-the-ssh-config-file/) to store all your different SSH connections. To use an SSH config file, run the `Remote-SSH: Open SSH Configuration File...` command.
{ "source": "voideditor/void", "title": "extensions/open-remote-ssh/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/open-remote-ssh/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1246 }
# Language Features for PHP files

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

See [PHP in Visual Studio Code](https://code.visualstudio.com/docs/languages/php) to learn about the features of this extension.
{ "source": "voideditor/void", "title": "extensions/php-language-features/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/php-language-features/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 280 }
# References View

This extension shows reference search results as a separate view, just like search results. It complements the peek view presentation that is also built into VS Code.

The following features are available:

* List All References via the Command Palette, the Context Menu, or via <kbd>Alt+Shift+F12</kbd>
* View references in a dedicated tree view that sits in the sidebar
* Navigate through search results via <kbd>F4</kbd> and <kbd>Shift+F4</kbd>
* Remove references from the list via inline commands

![](https://raw.githubusercontent.com/microsoft/vscode-references-view/master/media/demo.png)

**Note** that this extension is bundled with Visual Studio Code version 1.29 and later - it doesn't need to be installed anymore.

## Requirements

This extension is just an alternative UI for reference search; extensions implementing reference search must still be installed.

## Issues

This extension ships with Visual Studio Code and uses its issue tracker. Please file issues here: https://github.com/Microsoft/vscode/issues

# Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
{ "source": "voideditor/void", "title": "extensions/references-view/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/references-view/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1961 }
# Language Features for Search Result files

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

This extension provides Syntax Highlighting, Symbol Information, Result Highlighting, and Go to Definition capabilities for the Search Results Editor.
{ "source": "voideditor/void", "title": "extensions/search-result/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/search-result/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 299 }
# Simple Browser

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

Provides a very basic browser preview using an iframe embedded in a webview. This extension is primarily meant to be used by other extensions for showing simple web content.
{ "source": "voideditor/void", "title": "extensions/simple-browser/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/simple-browser/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 300 }
# theme-seti

This is an icon theme that uses the icons from [`seti-ui`](https://github.com/jesseweed/seti-ui).

## Previewing icons

There is a [`./icons/preview.html`](./icons/preview.html) file that can be opened to see all of the icons included in the theme. To view this, it needs to be hosted by a web server. The easiest way is to open the file with the `Open with Live Server` command from the [Live Server extension](https://marketplace.visualstudio.com/items?itemName=ritwickdey.LiveServer).

## Updating icons

- Make a PR against https://github.com/jesseweed/seti-ui with your icon changes.
- Once accepted there, ping us or make a PR yourself that updates the theme and font here.

To adopt the latest changes from https://github.com/jesseweed/seti-ui:

- have the main branches of `https://github.com/jesseweed/seti-ui` and `https://github.com/microsoft/vscode` cloned in the same parent folder
- in the `seti-ui` folder, run `npm install` and `npm run prepublishOnly`. This will generate updated icons and fonts.
- in the `vscode/extensions/theme-seti` folder run `npm run update`. This will launch the [icon theme update script](build/update-icon-theme.js) that updates the theme as well as the font based on content in `seti-ui`.
- to test the icon theme, look at the icon preview as described above.
- when done, create a PR with the changes in https://github.com/microsoft/vscode. Add a screenshot of the preview page to accompany it.

### Languages not shipped with `vscode`

Languages that are not shipped with `vscode` must be added to the `nonBuiltInLanguages` object inside of `update-icon-theme.js`. These should match [the file mapping in `seti-ui`](https://github.com/jesseweed/seti-ui/blob/master/styles/components/icons/mapping.less).

Please try and keep this list in alphabetical order! Thank you.
{ "source": "voideditor/void", "title": "extensions/theme-seti/CONTRIBUTING.md", "url": "https://github.com/voideditor/void/blob/main/extensions/theme-seti/CONTRIBUTING.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1832 }
# theme-seti

This is an icon theme that uses the icons from [`seti-ui`](https://github.com/jesseweed/seti-ui).

## Updating icons

There is a script that can be used to update icons, [./build/update-icon-theme.js](build/update-icon-theme.js). To run this script, run `npm run update` from the `theme-seti` directory.

This can be run in one of two ways: looking at a local copy of `seti-ui` for icons, or getting them straight from GitHub.

If you want to run it from a local copy of `seti-ui`, first clone [`seti-ui`](https://github.com/jesseweed/seti-ui) to the folder next to your `vscode` repo (from the `theme-seti` directory, `../../`). Then, inside the `seti-ui` directory, run `npm install` followed by `npm run prepublishOnly`. This will generate updated icons.

If you want to download the icons straight from GitHub, change the `FROM_DISK` variable to `false` inside of `update-icon-theme.js`.

### Languages not shipped with `vscode`

Languages that are not shipped with `vscode` must be added to the `nonBuiltInLanguages` object inside of `update-icon-theme.js`. These should match [the file mapping in `seti-ui`](https://github.com/jesseweed/seti-ui/blob/master/styles/components/icons/mapping.less).

Please try and keep this list in alphabetical order! Thank you.

## Previewing icons

There is a [`./icons/preview.html`](./icons/preview.html) file that can be opened to see all of the icons included in the theme. Note that to view this, it needs to be hosted by a web server.

When updating icons, it is always a good idea to make sure that they work properly by looking at this page. When submitting a PR that updates these icons, a screenshot of the preview page should accompany it.
{ "source": "voideditor/void", "title": "extensions/theme-seti/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/theme-seti/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1703 }
# Language Features for TypeScript and JavaScript files

**Notice:** This extension is bundled with Visual Studio Code. It can be disabled but not uninstalled.

## Features

See [TypeScript in Visual Studio Code](https://code.visualstudio.com/docs/languages/typescript) and [JavaScript in Visual Studio Code](https://code.visualstudio.com/docs/languages/javascript) to learn about the features of this extension.
{ "source": "voideditor/void", "title": "extensions/typescript-language-features/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/typescript-language-features/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 412 }
# vscode-dts

This is the place for the stable API and for API proposals.

## Consume a proposal

1. find a proposal you are interested in
1. add its name to your extension's `package.json#enabledApiProposals` property
1. run `npx vscode-dts dev` to download the `d.ts` files into your project
1. don't forget that extensions using proposed API cannot be published
1. learn more here: <https://code.visualstudio.com/api/advanced-topics/using-proposed-api>

## Add a new proposal

1. create a _new_ file in this directory, its name must follow this pattern `vscode.proposed.[a-zA-Z]+.d.ts`
1. creating the proposal-file will automatically update `src/vs/platform/extensions/common/extensionsApiProposals.ts` (make sure to run `npm run watch`)
1. declare and implement your proposal
1. make sure to use the `checkProposedApiEnabled` and/or `isProposedApiEnabled`-utils to enforce the API being proposed. Make sure to invoke them with your proposal's name which got generated into `extensionsApiProposals.ts`
1. you will most likely need to add your proposed API to vscode-api-tests as well
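As a sketch, the second step under "Consume a proposal" would look roughly like this in an extension's `package.json`; the extension and proposal names here are hypothetical placeholders:

```json
{
  "name": "my-extension",
  "enabledApiProposals": [
    "someProposalName"
  ]
}
```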
{ "source": "voideditor/void", "title": "src/vscode-dts/README.md", "url": "https://github.com/voideditor/void/blob/main/src/vscode-dts/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 1081 }
# VS Code Automation Package

This package contains functionality for automating various components of the VS Code UI, via an automation "driver" that connects from a separate process. It is used by the `smoke` tests.
{ "source": "voideditor/void", "title": "test/automation/README.md", "url": "https://github.com/voideditor/void/blob/main/test/automation/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 217 }
# Monaco Editor Test

This directory contains scripts that are used to smoke test the Monaco Editor distribution.

## Setup & Bundle

    $test/monaco> npm i
    $test/monaco> npm run bundle

## Compile and run tests

    $test/monaco> npm run compile
    $test/monaco> npm run test
{ "source": "voideditor/void", "title": "test/monaco/README.md", "url": "https://github.com/voideditor/void/blob/main/test/monaco/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 270 }
# VS Code Smoke Tests Failures History

This file contains a history of smoke test failures which could be avoided if particular techniques were used in the test (e.g. binding test elements with HTML5 `data-*` attribute).

To better understand what can be employed in smoke tests to ensure their stability, it is important to understand the patterns that led to smoke test breakage. This markdown is a result of work on [this issue](https://github.com/microsoft/vscode/issues/27906).

## Log

1. The following change led to a smoke test failure because the DOM element's attribute `a[title]` was changed: [eac49a3](https://github.com/microsoft/vscode/commit/eac49a321b84cb9828430e9dcd3f34243a3480f7)
   This attribute was used in the smoke test to grab the contents of the SCM part in the status bar: [0aec2d6](https://github.com/microsoft/vscode/commit/0aec2d6838b5e65cc74c33b853ffbd9fa191d636)
2. To be continued...
{ "source": "voideditor/void", "title": "test/smoke/Audit.md", "url": "https://github.com/voideditor/void/blob/main/test/smoke/Audit.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 903 }
# VS Code Smoke Test

Make sure you are on **Node v12.x**.

## Quick Overview

```bash
# Build extensions in the VS Code repo (if needed)
npm i && npm run compile

# Dev (Electron)
npm run smoketest

# Dev (Web - Must be run on distro)
npm run smoketest -- --web --browser [chromium|webkit]

# Build (Electron)
npm run smoketest -- --build <path to latest version>
example: npm run smoketest -- --build /Applications/Visual\ Studio\ Code\ -\ Insiders.app

# Build (Web - read instructions below)
npm run smoketest -- --build <path to server web build (ends in -web)> --web --browser [chromium|webkit]

# Remote (Electron)
npm run smoketest -- --build <path to latest version> --remote
```

\* This step is necessary only when running without `--build` and OSS doesn't already exist in the `.build/electron` directory.

### Running for a release (Endgame)

You must always run the smoketest version that matches the release you are testing. So, if you want to run the smoketest for a release build (e.g. `release/1.22`), you need to check out that version of the smoke tests too:

```bash
git fetch
git checkout release/1.22
npm i && npm run compile
cd test/smoke
npm i
```

#### Web

There is no support for testing an old version against a new one yet. Instead, simply configure the `--build` command line argument to point to the absolute path of the extracted server web build folder (e.g. `<rest of path here>/vscode-server-darwin-x64-web` for macOS). The server web build is available from the builds page (see previous subsection).

**macOS**: if you have downloaded the server with web bits, make sure to run the following command before unzipping it to avoid security issues on startup:

```bash
xattr -d com.apple.quarantine <path to server with web folder zip>
```

**Note**: make sure to point to the server that includes the client bits!

### Debug

- `--verbose` logs all the low level driver calls made to Code;
- `-f PATTERN` (alias `-g PATTERN`) filters the tests to be run. You can also use pretty much any mocha argument;
- `--headless` will run playwright in headless mode when `--web` is used.

**Note**: you can enable verbose logging of the playwright library by setting a `DEBUG` environment variable before running the tests (<https://playwright.dev/docs/debug#verbose-api-logs>), for example to `pw:browser`.

### Develop

```bash
cd test/smoke
npm run watch
```

## Troubleshooting

### Error: Could not get a unique tmp filename, max tries reached

On Windows, check for the folder `C:\Users\<username>\AppData\Local\Temp\t`. If this folder exists, the `tmp` module can't run properly, resulting in the error above. In this case, delete the `t` folder.

## Pitfalls

- Beware of workbench **state**. The tests within a single suite will share the same state.
- Beware of **singletons**. This evil can, and will, manifest itself under the form of FS paths, TCP ports, IPC handles. Whenever writing a test, or setting up more smoke test architecture, make sure it can run simultaneously with any other tests and even itself. All test suites should be able to run many times in parallel.
- Beware of **focus**. **Never** depend on DOM elements having focus using `.focused` classes or `:focus` pseudo-classes, since they will lose that state as soon as another window appears on top of the running VS Code window. A safe approach which avoids this problem is to use the `waitForActiveElement` API. Many tests use this whenever they need to wait for a specific element to _have focus_.
- Beware of **timing**. You need to read from or write to the DOM... but is it the right time to do that? Can you 100% guarantee that `input` box will be visible at that point in time? Or are you just hoping that it will be so? Hope is your worst enemy in UI tests. Example: just because you triggered Quick Access with `F1`, it doesn't mean that it's open and you can just start typing; you must first wait for the input element to be in the DOM as well as be the current active element.
- Beware of **waiting**. **Never** wait longer than a couple of seconds for anything, unless it's justified. Think of it as a human using Code. Would a human take 10 minutes to run through the Search viewlet smoke test? Then the computer should be even faster. **Don't** use `setTimeout` just because. Think about what you should wait for in the DOM to be ready and wait for that instead.
{ "source": "voideditor/void", "title": "test/smoke/README.md", "url": "https://github.com/voideditor/void/blob/main/test/smoke/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 4382 }
# Unit Tests

## Run (inside Electron)

    ./scripts/test.[sh|bat]

All unit tests are run inside an Electron renderer environment with access to the DOM and Node.js APIs. This is the closest to the environment in which VS Code itself ships. Notes:

- use the `--debug` flag to see an Electron window with dev tools, which allows for debugging
- to run only a subset of tests use the `--run` or `--glob` options
- use `npm run watch` to automatically compile changes

For instance, `./scripts/test.sh --debug --glob **/extHost*.test.js` runs all tests from `extHost`-files and enables you to debug them.

## Run (inside browser)

    npm run test-browser -- --browser webkit --browser chromium

Unit tests from the layers `common` and `browser` are run inside `chromium`, `webkit`, and (soon'ish) `firefox` (using playwright). This complements our electron-based unit test runner and adds more coverage of supported platforms. Notes:

- these tests are part of the continuous build, which means you might have test failures that only happen with webkit on _windows_ or _chromium_ on linux
- you can run these tests locally via `npm run test-browser -- --browser chromium --browser webkit`
- to debug, open `<vscode>/test/unit/browser/renderer.html` inside a browser and use the `?m=<amd_module>` query to specify what AMD module to load, e.g. `file:///Users/jrieken/Code/vscode/test/unit/browser/renderer.html?m=vs/base/test/common/strings.test` runs all tests from `strings.test.ts`
- to run only a subset of tests use the `--run` or `--glob` options

**Note**: you can enable verbose logging of the playwright library by setting a `DEBUG` environment variable before running the tests (https://playwright.dev/docs/debug#verbose-api-logs)

## Run (with node)

    npm run test-node -- --run src/vs/editor/test/browser/controller/cursor.test.ts

## Coverage

The following command will create a `coverage` folder in the `.build` folder at the root of the workspace:

### OS X and Linux

    ./scripts/test.sh --coverage

### Windows

    scripts\test --coverage
{ "source": "voideditor/void", "title": "test/unit/README.md", "url": "https://github.com/voideditor/void/blob/main/test/unit/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 2040 }
The file `JavaScript.tmLanguage.json` is derived from [TypeScriptReact.tmLanguage](https://github.com/microsoft/TypeScript-TmLanguage/blob/master/TypeScriptReact.tmLanguage).

To update to the latest version:

- `cd extensions/typescript` and run `npm run update-grammars`
- don't forget to run the integration tests at `./scripts/test-integration.sh`

The script does the following changes:

- fileTypes .tsx -> .js & .jsx
- scopeName scope.tsx -> scope.js
- update all rule names .tsx -> .js
{ "source": "voideditor/void", "title": "extensions/javascript/syntaxes/Readme.md", "url": "https://github.com/voideditor/void/blob/main/extensions/javascript/syntaxes/Readme.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 493 }
# VSCode JSON Language Server

[![NPM Version](https://img.shields.io/npm/v/vscode-json-languageserver.svg)](https://npmjs.org/package/vscode-json-languageserver)
[![NPM Downloads](https://img.shields.io/npm/dm/vscode-json-languageserver.svg)](https://npmjs.org/package/vscode-json-languageserver)
[![NPM Version](https://img.shields.io/npm/l/vscode-json-languageserver.svg)](https://npmjs.org/package/vscode-json-languageserver)

The JSON Language server provides language-specific smarts for editing, validating and understanding JSON documents. It runs as a separate executable and implements the [language server protocol](https://microsoft.github.io/language-server-protocol/overview) to be connected by any code editor or IDE.

## Capabilities

### Server capabilities

The JSON language server supports requests on documents of language id `json` and `jsonc`.

- `json` documents are parsed and validated following the [JSON specification](https://tools.ietf.org/html/rfc7159).
- `jsonc` documents additionally accept single line (`//`) and multi-line comments (`/* ... */`). JSONC is a VSCode specific file format, intended for VSCode configuration files, without any aspirations to define a new common file format.

The server implements the following capabilities of the language server protocol:

- [Code completion](https://microsoft.github.io/language-server-protocol/specification#textDocument_completion) for JSON properties and values based on the document's [JSON schema](http://json-schema.org/) or based on existing properties and values used at other places in the document. JSON schemas are configured through the server configuration options.
- [Hover](https://microsoft.github.io/language-server-protocol/specification#textDocument_hover) for values based on descriptions in the document's [JSON schema](http://json-schema.org/).
- [Document Symbols](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentSymbol) for quick navigation to properties in the document.
- [Document Colors](https://microsoft.github.io/language-server-protocol/specification#textDocument_documentColor) for showing color decorators on values representing colors and [Color Presentation](https://microsoft.github.io/language-server-protocol/specification#textDocument_colorPresentation) for color presentation information to support color pickers. The location of colors is defined by the document's [JSON schema](http://json-schema.org/). All values marked with `"format": "color-hex"` (VSCode specific, non-standard JSON Schema extension) are considered color values. The supported color formats are `#rgb[a]` and `#rrggbb[aa]`.
- [Code Formatting](https://microsoft.github.io/language-server-protocol/specification#textDocument_rangeFormatting) supporting ranges and formatting the whole document.
- [Folding Ranges](https://microsoft.github.io/language-server-protocol/specification#textDocument_foldingRange) for all folding ranges in the document.
- Semantic Selection for semantic selection for one or multiple cursor positions.
- [Goto Definition](https://microsoft.github.io/language-server-protocol/specification#textDocument_definition) for $ref references in JSON schemas
- [Diagnostics (Validation)](https://microsoft.github.io/language-server-protocol/specification#textDocument_publishDiagnostics) are pushed for all open documents
  - syntax errors
  - structural validation based on the document's [JSON schema](http://json-schema.org/).
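To make the `json`/`jsonc` distinction above concrete, here is a minimal sketch of a `jsonc` document; the `$schema` URL is only a hypothetical placeholder:

```jsonc
{
  // comments like this are only accepted in documents with language id "jsonc"
  "$schema": "https://example.com/schemas/my.schema.json",
  "name": "sample"
}
```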
In order to load JSON schemas, the JSON server uses NodeJS `http` and `fs` modules. For all other features, the JSON server only relies on the documents and settings provided by the client through the LSP.

### Client requirements

The JSON language server expects the client to only send requests and notifications for documents of language id `json` and `jsonc`.

The JSON language server has the following dependencies on the client's capabilities:

- Code completion requires that the client capability has *snippetSupport*. If not supported by the client, the server will not offer the completion capability.
- Formatting support requires the client to support *dynamicRegistration* for *rangeFormatting*. If not supported by the client, the server will not offer the format capability.

## Configuration

### Initialization options

The client can send the following initialization options to the server:

- `provideFormatter: boolean | undefined`. If defined, the value defines whether the server provides the `documentRangeFormattingProvider` capability on initialization. If undefined, the setting `json.format.enable` is used to determine whether formatting is provided. The formatter will then be registered through dynamic registration. If the client does not support dynamic registration, no formatter will be available.
- `handledSchemaProtocols`: The URI schemas handled by the server. See the section `Schema configuration` below.
- `customCapabilities`: Additional non-LSP client capabilities:
  - `rangeFormatting: { editLimit: x }`: For performance reasons, limit the number of edits returned by the range formatter to `x`.

### Settings

Clients may send a `workspace/didChangeConfiguration` notification to notify the server of settings changes. The server supports the following settings:

- http
  - `proxy`: The URL of the proxy server to use when fetching schema. When undefined or empty, no proxy is used.
  - `proxyStrictSSL`: Whether the proxy server certificate should be verified against the list of supplied CAs.
- json
  - `format`
    - `enable`: Whether the server should register the formatting support. This option is only applicable if the client supports *dynamicRegistration* for *rangeFormatting* and `initializationOptions.provideFormatter` is not defined.
  - `validate`
    - `enable`: Whether the server should validate. Defaults to `true` if not set.
  - `schemas`: Configures association of file names to schema URL or schemas and/or associations of schema URL to schema content.
    - `fileMatch`: an array of file names or paths (separated by `/`). `*` can be used as a wildcard. Exclusion patterns can also be defined and start with '!'. A file matches when there is at least one matching pattern and the last matching pattern is not an exclusion pattern.
    - `folderUri`: If provided, the association is only used if the document is located in the given folder (directly or in a subfolder)
    - `url`: The URL of the schema, optional when also a schema is provided.
    - `schema`: The schema content, optional
- `resultLimit`: The max number of color decorators and outline symbols to be computed (for performance reasons)
- `jsonFoldingLimit`: The max number of folding ranges to be computed for json documents (for performance reasons)
- `jsoncFoldingLimit`: The max number of folding ranges to be computed for jsonc documents (for performance reasons)

```json
{
    "http": {
        "proxy": "",
        "proxyStrictSSL": true
    },
    "json": {
        "format": {
            "enable": true
        },
        "schemas": [
            {
                "fileMatch": [
                    "foo.json",
                    "*.superfoo.json"
                ],
                "url": "http://json.schemastore.org/foo",
                "schema": {
                    "type": "array"
                }
            }
        ]
    }
}
```

### Schema configuration and custom schema content delivery

[JSON schemas](http://json-schema.org/) are essential for code assist, hovers and color decorators to work, and are required for structural validation.

To find the schema for a given JSON document, the server uses the following mechanisms:

- JSON documents can define the schema URL using a `$schema` property
- The settings define a schema association based on the document's URL. Settings can either associate a schema URL to a file or path pattern, and they can directly provide a schema.
- Additionally, schema associations can also be provided by a custom 'schemaAssociations' configuration call.

Schemas are identified by URLs. To load the content of a schema, the JSON language server either tries to load from that URI or path itself, or delegates to the client.

The `initializationOptions.handledSchemaProtocols` initialization option defines which URLs are handled by the server. Requests for all other URIs are sent to the client. `handledSchemaProtocols` is part of the initialization options and can't be changed while the server is running.

```ts
let clientOptions: LanguageClientOptions = {
    initializationOptions: {
        handledSchemaProtocols: ['file'] // language server should only try to load file URLs
    }
    ...
}
```

If `handledSchemaProtocols` is not set, the JSON language server will load the following URLs itself:

- `http`, `https`: Loaded using NodeJS's HTTP support. Proxies can be configured through the settings.
- `file`: Loaded using NodeJS's `fs` support.

#### Schema content request

Requests for schemas with URLs not handled by the server are forwarded to the client through an LSP request. This request is a JSON language server-specific, non-standardized, extension to the LSP.

Request:

- method: 'vscode/content'
- params: `string` - The schema URL to request.
- response: `string` - The content of the schema with the given URL

#### Schema content change notification

When the client is aware that a schema content has changed, it will notify the server through a notification. This notification is a JSON language server-specific, non-standardized, extension to the LSP. The server will, as a response, clear the schema content from the cache and reload the schema content when required again.

#### Schema associations notification

In addition to the settings, schema associations can also be provided through a notification from the client to the server. This notification is a JSON language server-specific, non-standardized, extension to the LSP.

Notification:

- method: 'json/schemaAssociations'
- params: `ISchemaAssociations` or `ISchemaAssociation[]` defined as follows

```ts
interface ISchemaAssociations {
	/**
	 * An object where:
	 *  - keys are file names or file paths (using `/` as path separator). `*` can be used as a wildcard.
	 *  - values are arrays of schema URIs
	 */
	[pattern: string]: string[];
}

interface ISchemaAssociation {
	/**
	 * The URI of the schema, which is also the identifier of the schema.
	 */
	uri: string;

	/**
	 * A list of file path patterns that are associated to the schema. The '*' wildcard can be used. Exclusion patterns start with '!'.
	 * For example '*.schema.json', 'package.json', '!foo*.schema.json'.
	 * A match succeeds when there is at least one pattern matching and the last matching pattern does not start with '!'.
	 */
	fileMatch: string[];

	/**
	 * If provided, the association is only used if the validated document is located in the given folder (directly or in a subfolder)
	 */
	folderUri?: string;

	/*
	 * The schema for the given URI.
	 * If no schema is provided, the schema will be fetched with the schema request service (if available).
	 */
	schema?: JSONSchema;
}
```

`ISchemaAssociations`

- keys: file names or file paths (separated by `/`). `*` can be used as a wildcard.
- values: an array of schema URLs

Notification:

- method: 'json/schemaContent'
- params: `string` the URL of the schema that has changed.

### Item Limit

If the setting `resultLimit` is set, the JSON language server will limit the number of color symbols and document symbols computed.

If the setting `jsonFoldingLimit` or `jsoncFoldingLimit` is set, the JSON language server will limit the number of folding ranges computed.

## Try

The JSON language server is shipped with [Visual Studio Code](https://code.visualstudio.com/) as part of the built-in VSCode extension `json-language-features`. The server is started when the first JSON file is opened. See the [VSCode JSON documentation](https://code.visualstudio.com/docs/languages/json) for detailed information on the user experience and for more information on how to configure the language support.

## Integrate

If you plan to integrate the JSON language server into an editor or IDE, check out [this page](https://microsoft.github.io/language-server-protocol/implementors/tools/) to see if there's already an LSP client integration available.

You can also launch the language server as a command and connect to it. For that, install the `vscode-json-languageserver` npm module:

`npm install -g vscode-json-languageserver`

Start the language server with the `vscode-json-languageserver` command. Use a command line argument to specify the preferred communication channel:

```
vscode-json-languageserver --node-ipc
vscode-json-languageserver --stdio
vscode-json-languageserver --socket=<port>
```

To connect to the server from NodeJS, see Remy Suen's great write-up on [how to communicate with the server](https://github.com/rcjsuen/dockerfile-language-server-nodejs#communicating-with-the-server) through the available communication channels.

## Participate

The source code of the JSON language server can be found in the [VSCode repository](https://github.com/microsoft/vscode) at [extensions/json-language-features/server](https://github.com/microsoft/vscode/tree/master/extensions/json-language-features/server).

File issues and pull requests in the [VSCode GitHub Issues](https://github.com/microsoft/vscode/issues). See the document [How to Contribute](https://github.com/microsoft/vscode/wiki/How-to-Contribute) on how to build and run from source.

Most of the functionality of the server is located in libraries:

- [jsonc-parser](https://github.com/microsoft/node-jsonc-parser) contains the JSON parser and scanner.
- [vscode-json-languageservice](https://github.com/microsoft/vscode-json-languageservice) contains the implementation of all features as a re-usable library.
- [vscode-languageserver-node](https://github.com/microsoft/vscode-languageserver-node) contains the implementation of language server for NodeJS.

Help on any of these projects is very welcome.

## Code of Conduct

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [[email protected]](mailto:[email protected]) with any additional questions or comments.

## License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the [MIT](https://github.com/microsoft/vscode/blob/master/LICENSE.txt) License.
{ "source": "voideditor/void", "title": "extensions/json-language-features/server/README.md", "url": "https://github.com/voideditor/void/blob/main/extensions/json-language-features/server/README.md", "date": "2024-09-11T02:37:00", "stars": 10433, "description": null, "file_size": 14742 }