podcastfy package
=================

Subpackages
-----------

.. toctree::
   :maxdepth: 4

   podcastfy.content_parser

Submodules
----------

podcastfy.client module
-----------------------

.. automodule:: podcastfy.client
   :members:
   :undoc-members:
   :show-inheritance:

podcastfy.content\_generator module
-----------------------------------

.. automodule:: podcastfy.content_generator
   :members:
   :undoc-members:
   :show-inheritance:

podcastfy.text\_to\_speech module
---------------------------------

.. automodule:: podcastfy.text_to_speech
   :members:
   :undoc-members:
   :show-inheritance:

Module contents
---------------

.. automodule:: podcastfy
   :members:
   :undoc-members:
   :show-inheritance:
{ "source": "souzatharsis/podcastfy", "title": "docs/source/podcastfy.rst", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/podcastfy.rst", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 730 }
# Podcastfy REST API Documentation

## Overview

The Podcastfy API allows you to programmatically generate AI podcasts from various input sources. This document outlines the API endpoints and their usage.

## Using cURL with Podcastfy API

### Prerequisites

1. Confirm cURL installation:

```bash
curl --version
```

### API Request Flow

Making a prediction requires two sequential requests:

1. POST request to initiate processing - returns an `EVENT_ID`
2. GET request to fetch results - uses the `EVENT_ID` returned by the first request

Between step 1 and step 2 there is a delay of 1-3 minutes. We are working on reducing this delay and implementing a way to notify the user when the podcast is ready. Thanks for your patience!

### Basic Request Structure

```bash
# Step 1: POST request to initiate processing
# Make sure to include http:// or https:// in the URL
curl -X POST https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs \
  -H "Content-Type: application/json" \
  -d '{
    "data": [
      "text_input",
      "https://yourwebsite.com",
      [],
      [],
      "gemini_key",
      "openai_key",
      "elevenlabs_key",
      2000,
      "engaging,fast-paced",
      "main summarizer",
      "questioner",
      "Introduction,Content,Conclusion",
      "PODCASTFY",
      "YOUR PODCAST",
      "openai",
      0.7,
      ""
    ]
  }'

# Step 2: GET request to fetch results
curl -N https://thatupiso-podcastfy-ai-demo.hf.space/gradio_api/call/process_inputs/$EVENT_ID

# Example output result
event: complete
data: [{"path": "/tmp/gradio/bcb143f492b1c9a6dbde512557541e62f090bca083356be0f82c2e12b59af100/podcast_81106b4ca62542f1b209889832a421df.mp3", "url": "https://thatupiso-podcastfy-ai-demo.hf.space/gradio_a/gradio_api/file=/tmp/gradio/bcb143f492b1c9a6dbde512557541e62f090bca083356be0f82c2e12b59af100/podcast_81106b4ca62542f1b209889832a421df.mp3", "size": null, "orig_name": "podcast_81106b4ca62542f1b209889832a421df.mp3", "mime_type": null, "is_stream": false, "meta": {"_type": "gradio.FileData"}}]
```

Note that the request body must be valid JSON (it cannot contain comments). The `data` array is positional; the meaning of each entry is given in the Parameter Details table below.

You can download the file by appending the value of the `path` field to the URL prefix "https://thatupiso-podcastfy-ai-demo.hf.space/gradio_a/gradio_api/file=". (Note: the `url` field above has a bug introduced by Gradio, so please ignore it.)

### Parameter Details

| Index | Parameter | Type | Description |
|-------|-----------|------|-------------|
| 0 | text_input | string | Direct text input for podcast generation |
| 1 | urls_input | string | URLs to process (include http:// or https://) |
| 2 | pdf_files | array | List of PDF files to process |
| 3 | image_files | array | List of image files to process |
| 4 | gemini_key | string | Google Gemini API key |
| 5 | openai_key | string | OpenAI API key |
| 6 | elevenlabs_key | string | ElevenLabs API key |
| 7 | word_count | number | Target word count for podcast |
| 8 | conversation_style | string | Conversation style descriptors (e.g. "engaging,fast-paced") |
| 9 | roles_person1 | string | Role of first speaker |
| 10 | roles_person2 | string | Role of second speaker |
| 11 | dialogue_structure | string | Structure of dialogue (e.g. "Introduction,Content,Conclusion") |
| 12 | podcast_name | string | Name of the podcast |
| 13 | podcast_tagline | string | Podcast tagline |
| 14 | tts_model | string | Text-to-speech model ("gemini", "openai", "elevenlabs", or "edge") |
| 15 | creativity_level | number | Level of creativity (0-1) |
| 16 | user_instructions | string | Custom instructions for generation |

## Using Python

### Installation

```bash
pip install gradio_client
```

### Quick Start

```python
from gradio_client import Client, handle_file

client = Client("thatupiso/Podcastfy.ai_demo")
```

### API Endpoints

#### Generate Podcast (`/process_inputs`)

Generates a podcast from provided text, URLs, PDFs, or images.

##### Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| text_input | str | Yes | - | Raw text input for podcast generation |
| urls_input | str | Yes | - | Comma-separated URLs to process |
| pdf_files | List[filepath] | Yes | None | List of PDF files to process |
| image_files | List[filepath] | Yes | None | List of image files to process |
| gemini_key | str | No | "" | Google Gemini API key |
| openai_key | str | No | "" | OpenAI API key |
| elevenlabs_key | str | No | "" | ElevenLabs API key |
| word_count | float | No | 2000 | Target word count for podcast |
| conversation_style | str | No | "engaging,fast-paced,enthusiastic" | Conversation style descriptors |
| roles_person1 | str | No | "main summarizer" | Role of first speaker |
| roles_person2 | str | No | "questioner/clarifier" | Role of second speaker |
| dialogue_structure | str | No | "Introduction,Main Content Summary,Conclusion" | Structure of dialogue |
| podcast_name | str | No | "PODCASTFY" | Name of the podcast |
| podcast_tagline | str | No | "YOUR PERSONAL GenAI PODCAST" | Podcast tagline |
| tts_model | Literal['openai', 'elevenlabs', 'edge'] | No | "openai" | Text-to-speech model |
| creativity_level | float | No | 0.7 | Level of creativity (0-1) |
| user_instructions | str | No | "" | Custom instructions for generation |

##### Returns

| Type | Description |
|------|-------------|
| filepath | Path to generated audio file |

##### Example Usage

```python
from gradio_client import Client, handle_file

client = Client("thatupiso/Podcastfy.ai_demo")

# Generate podcast from URL
result = client.predict(
    text_input="",
    urls_input="https://example.com/article",
    pdf_files=[],
    image_files=[],
    gemini_key="your-gemini-key",
    openai_key="your-openai-key",
    word_count=1500,
    conversation_style="casual,informative",
    podcast_name="Tech Talk",
    tts_model="openai",
    creativity_level=0.8
)

print(f"Generated podcast: {result}")
```

### Error Handling

The API will return appropriate error messages for:
- Invalid API keys
- Malformed input
- Failed file processing
- TTS generation errors

### Rate Limits

Please be aware of the rate limits for the underlying services:
- Gemini API
- OpenAI API
- ElevenLabs API

## Notes

- At least one input source (text, URL, PDF, or image) must be provided
- API keys are required for corresponding services
- The generated audio file format is MP3
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/api.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/api.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 6561 }
## CLI

Podcastfy can be used as a command-line interface (CLI) tool. See below some usage examples. Please make sure you follow the configuration instructions first - [See Setup](README.md#setup).

1. Generate a podcast from URLs (using OpenAI TTS by default):
   ```
   python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2
   ```

2. Generate a podcast from URLs using ElevenLabs TTS:
   ```
   python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs
   ```

3. Generate a podcast from a file containing URLs:
   ```
   python -m podcastfy.client --file path/to/urls.txt
   ```

4. Generate a podcast from an existing transcript file:
   ```
   python -m podcastfy.client --transcript path/to/transcript.txt
   ```

5. Generate only a transcript (without audio) from URLs:
   ```
   python -m podcastfy.client --url https://example.com/article1 --transcript-only
   ```

6. Generate a podcast using a combination of URLs and a file:
   ```
   python -m podcastfy.client --url https://example.com/article1 --file path/to/urls.txt
   ```

7. Generate a podcast from image files:
   ```
   python -m podcastfy.client --image path/to/image1.jpg --image path/to/image2.png
   ```

8. Generate a podcast with a custom conversation configuration:
   ```
   python -m podcastfy.client --url https://example.com/article1 --conversation-config path/to/custom_config.yaml
   ```

9. Generate a podcast from URLs and images:
   ```
   python -m podcastfy.client --url https://example.com/article1 --image path/to/image1.jpg
   ```

10. Generate a transcript using a local LLM:
    ```
    python -m podcastfy.client --url https://example.com/article1 --transcript-only --local
    ```

11. Generate a podcast from raw text input:
    ```
    python -m podcastfy.client --text "Your raw text content here that you want to convert into a podcast"
    ```

For more information on available options, use:
```
python -m podcastfy.client --help
```
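Each of these commands has a Python-package equivalent via `generate_podcast` (see the How-to guide). As a quick illustration, here is a sketch of example 1 in Python; the URLs are placeholders.

```python
from podcastfy.client import generate_podcast

# Python equivalent of example 1: generate a podcast from two URLs
# (OpenAI TTS is the default backend).
audio_file = generate_podcast(
    urls=[
        "https://example.com/article1",
        "https://example.com/article2",
    ]
)
print(f"Generated podcast: {audio_file}")
```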
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/cli.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/cli.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2043 }
# Podcastfy Configuration

## API keys

The project uses a combination of a `.env` file for managing API keys and sensitive information, and a `config.yaml` file for non-sensitive configuration settings. Follow these steps to set up your configuration:

1. Create a `.env` file in the root directory of the project.
2. Add your API keys and other sensitive information to the `.env` file. For example:

   ```
   GEMINI_API_KEY=your_gemini_api_key_here
   ELEVENLABS_API_KEY=your_elevenlabs_api_key_here
   OPENAI_API_KEY=your_openai_api_key_here
   ```

API Key Requirements:
- `GEMINI_API_KEY`: Required for transcript generation if not using a [local llm](local_llm.md). (Get your [free API key](https://aistudio.google.com/app/apikey).)
- `OPENAI_API_KEY` or `ELEVENLABS_API_KEY`: Required for audio generation if not using Microsoft Edge TTS (`tts_model=edge`).

Ensure you have the necessary API keys based on your intended usage of Podcastfy.

> [!Note]
> Never share your `.env` file or commit it to version control. It contains sensitive information that should be kept private. The `config.yaml` file can be shared and version-controlled as it doesn't contain sensitive data.

## Example Configurations

Here's a table showing example configurations:

| Configuration | Base LLM | TTS Model | API Keys Required |
|---------------|----------|-----------|-------------------|
| Default | Gemini | OpenAI | GEMINI_API_KEY and OPENAI_API_KEY |
| No API Keys Required | Local LLM | Edge | None |
| Recommended | Gemini | 'gemini' (Google) | GEMINI_API_KEY |

In our experience, ElevenLabs and the Google TTS model are the best in terms of audio quality, with the latter having an edge over the former due to its multispeaker capability. ElevenLabs is the most expensive but is easy to set up and offers great customization (voice options and multilingual capability). The Google TTS model is cheaper but is limited to English only and requires some extra steps to set up.

## Setting up the Google TTS Model

You can use the Google TTS model by setting the `tts_model` parameter to `gemini` in Podcastfy. The Google TTS model requires a Google Cloud API key; you can use the same API key you are already using for Gemini or create a new one. After you have secured your API key, there are two additional steps in order to use the Google Multispeaker TTS model:

- Step 1: Enable the Cloud Text-to-Speech API for the API key.
  - Go to "https://console.cloud.google.com/apis/dashboard"
  - Select your project (or create one by clicking on the project list and then on "New project")
  - Click "+ ENABLE APIS AND SERVICES" at the top of the screen
  - Enter "text-to-speech" into the search box
  - Click on "Cloud Text-to-Speech API" and then on "ENABLE"
  - You should end up here: "https://console.cloud.google.com/apis/library/texttospeech.googleapis.com?project=..."
- Step 2: Add the Cloud Text-to-Speech API permission to the API key you're using in the Google Cloud console.
  - Go to https://console.cloud.google.com/apis/credentials
  - Click on whatever key you're using for Gemini
  - Go down to API Restrictions and add the Cloud Text-to-Speech API

Phew!!! That was a lot of steps, but you only need to do it once and you might be impressed with the quality of the audio. See [Google TTS](https://cloud.google.com/text-to-speech) for more details. Thank you @mobarski and @evandempsey for the help!

## Conversation Configuration

See [conversation_custom.md](conversation_custom.md) for more details.

## Running Local LLMs

See [local_llm.md](local_llm.md) for more details.
## Optional configuration The `config.yaml` file in the root directory contains non-sensitive configuration settings. You can modify this file to adjust various parameters such as output directories, text-to-speech settings, and content generation options. The application will automatically load the environment variables from `.env` and the configuration settings from `config.yaml` when it runs. See [Configuration](config_custom.md) if you would like to further customize settings.
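If Podcastfy complains about missing keys, a quick way to confirm that your `.env` file is readable is to load it yourself with `python-dotenv`. This is only a troubleshooting sketch; Podcastfy loads the file automatically as described above.

```python
import os

from dotenv import load_dotenv  # pip install python-dotenv

# Load .env from the current working directory and report which keys are visible.
load_dotenv()
for key in ("GEMINI_API_KEY", "OPENAI_API_KEY", "ELEVENLABS_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'missing'}")
```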
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/config.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/config.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 4098 }
# Podcastfy Advanced Configuration Guide

Podcastfy uses a `config.yaml` file to manage various settings and parameters. This guide explains each configuration option available in the file.

## Content Generator

- `gemini_model`: "gemini-1.5-pro-latest"
  - The Gemini AI model used for content generation.
- `max_output_tokens`: 8192
  - Maximum number of tokens for the output generated by the AI model.
- `temperature`: 1
  - Controls randomness in the AI's output. 0 means deterministic responses. Range for gemini-1.5-pro: 0.0 - 2.0 (default: 1.0).
- `langchain_tracing_v2`: false
  - Enables LangChain tracing for debugging and monitoring. If true, requires a LangSmith API key.

## Content Extractor

- `youtube_url_patterns`:
  - Patterns to identify YouTube URLs.
  - Current patterns: "youtube.com", "youtu.be"

## Website Extractor

- `markdown_cleaning`:
  - `remove_patterns`:
    - Patterns to remove from extracted markdown content.
    - Current patterns remove image links, hyperlinks, and URLs.

## YouTube Transcriber

- `remove_phrases`:
  - Phrases to remove from YouTube transcriptions.
  - Current phrase: "[music]"

## Logging

- `level`: "INFO"
  - Default logging level.
- `format`: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  - Format string for log messages.

## Website Extractor

- `markdown_cleaning`:
  - `remove_patterns`:
    - Additional patterns to remove from extracted markdown content:
      - '\[.*?\]': Remove square brackets and their contents
      - '\(.*?\)': Remove parentheses and their contents
      - '^\s*[-*]\s': Remove list item markers
      - '^\s*\d+\.\s': Remove numbered list markers
      - '^\s*#+': Remove markdown headers
  - `unwanted_tags`:
    - HTML tags to be removed during extraction:
      - 'script', 'style', 'nav', 'footer', 'header', 'aside', 'noscript'
  - `user_agent`: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    - User agent string to be used for web requests.
  - `timeout`: 10
    - Request timeout in seconds for web scraping.
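To see what the `remove_patterns` above do in practice, here is an illustrative sketch that applies them to a markdown snippet. The regular expressions are taken from the list above, but the function itself is not Podcastfy's actual cleaning code.

```python
import re

# Patterns listed above, applied line by line (illustrative sketch only).
remove_patterns = [
    r"\[.*?\]",      # square brackets and their contents
    r"\(.*?\)",      # parentheses and their contents
    r"^\s*[-*]\s",   # list item markers
    r"^\s*\d+\.\s",  # numbered list markers
    r"^\s*#+",       # markdown headers
]

def clean_markdown(text: str) -> str:
    cleaned = []
    for line in text.splitlines():
        for pattern in remove_patterns:
            line = re.sub(pattern, "", line)
        cleaned.append(line)
    return "\n".join(cleaned)

print(clean_markdown("# Title\n1. [a link](https://example.com) item"))
```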
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/config_custom copy.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/config_custom copy.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2054 }
# Podcastfy Advanced Configuration Guide

Podcastfy uses a `config.yaml` file to manage various settings and parameters. This guide explains each configuration option available in the file.

## Customizing the Conversation

See [conversation_custom.md](conversation_custom.md) for more details.

## Output Directories

- `transcripts`: "./data/transcripts"
  - Directory where generated transcripts are saved.
- `audio`: "./data/audio"
  - Directory where generated audio files are saved.

## Text-to-Speech (TTS) Settings

### ElevenLabs TTS

- `default_voices`:
  - `question`: "Chris"
    - Default voice for questions in the podcast.
  - `answer`: "BrittneyHart"
    - Default voice for answers in the podcast.
- `model`: "eleven_multilingual_v2"
  - The ElevenLabs TTS model to use.

### OpenAI TTS

- `default_voices`:
  - `question`: "echo"
    - Default voice for questions using OpenAI TTS.
  - `answer`: "shimmer"
    - Default voice for answers using OpenAI TTS.
- `model`: "tts-1-hd"
  - The OpenAI TTS model to use.

### Edge TTS

- `default_voices`:
  - `question`: "en-US-JennyNeural"
    - Default voice for questions using Edge TTS.
  - `answer`: "en-US-EricNeural"
    - Default voice for answers using Edge TTS.

### General TTS Settings

- `audio_format`: "mp3"
  - Format of the generated audio files.
- `temp_audio_dir`: "data/audio/tmp/"
  - Temporary directory for audio processing.
- `ending_message`: "Tchau!"
  - Message to be appended at the end of the podcast.

## Content Generator

- `gemini_model`: "gemini-1.5-pro-latest"
  - The Gemini AI model used for content generation.
- `system_prompt_file`: "prompt.txt"
  - File containing the system prompt for content generation.
- `max_output_tokens`: 8192
  - Maximum number of tokens for the output generated by the AI model.
- `temperature`: 0
  - Controls randomness in the AI's output. 0 means deterministic responses.
- `langchain_tracing_v2`: true
  - Enables LangChain tracing for debugging and monitoring.

## Content Extractor

- `youtube_url_patterns`:
  - Patterns to identify YouTube URLs.
  - Current patterns: "youtube.com", "youtu.be"

## Website Extractor

- `jina_api_url`: "https://r.jina.ai"
  - URL for the Jina API used in content extraction.
- `markdown_cleaning`:
  - `remove_patterns`:
    - Patterns to remove from extracted markdown content.
    - Current patterns remove image links, hyperlinks, and URLs.

## YouTube Transcriber

- `remove_phrases`:
  - Phrases to remove from YouTube transcriptions.
  - Current phrase: "[music]"

## Logging

- `level`: "INFO"
  - Default logging level.
- `format`: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
  - Format string for log messages.

## Main Settings

- `default_tts_model`: "openai"
  - Default Text-to-Speech model to use when not specified.
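Before editing, it can help to see what your local `config.yaml` actually contains. Here is a small inspection sketch with PyYAML; it simply reads the file, it does not go through Podcastfy's own config loader, and the section names depend on your copy of the file.

```python
import yaml  # pip install pyyaml

# Print every top-level section of config.yaml so you can see what is tunable
# before changing values such as the TTS default voices described above.
with open("config.yaml", "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

for section, value in config.items():
    print(f"{section}: {value}")
```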
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/config_custom.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/config_custom.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 2809 }
# Podcastfy Conversation Configuration Podcastfy offers a range of customization options to tailor your AI-generated podcasts. This document outlines how you can adjust parameters such as conversation style, word count, and dialogue structure to suit your specific needs. ## Table of Contents 1. [Parameters](#parameters) 2. [Customization Examples](#customization-examples) 1. [Academic Debate](#academic-debate) 2. [Storytelling Adventure](#storytelling-adventure) 3. [Customization Scenarios](#customization-scenarios) 1. [Using the Python Package](#using-the-python-package) 2. [Using the CLI](#using-the-cli) 4. [Notes of Caution](#notes-of-caution) ## Conversation Parameters Podcastfy uses the default conversation configuration stored in [podcastfy/conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml). | Parameter | Default Value | Type | Description | |-----------|---------------|------|-------------| | word_count | 2000 | int | Target word count for the generated content | | conversation_style | ["engaging", "fast-paced", "enthusiastic"] | list[str] | Styles to apply to the conversation | | roles_person1 | "main summarizer" | str | Role of the first speaker | | roles_person2 | "questioner/clarifier" | str | Role of the second speaker | | dialogue_structure | ["Introduction", "Main Content Summary", "Conclusion"] | list[str] | Structure of the dialogue | | podcast_name | "PODCASTFY" | str | Name of the podcast | | podcast_tagline | "YOUR PERSONAL GenAI PODCAST" | str | Tagline for the podcast | | output_language | "English" | str | Language of the output | | engagement_techniques | ["rhetorical questions", "anecdotes", "analogies", "humor"] | list[str] | Techniques to engage the audience | | creativity | 0 | int | Level of creativity/temperature (0-1) | | user_instructions | "" | str | Custom instructions to guide the conversation focus and topics | ## Text-to-Speech (TTS) Settings Podcastfy uses the default TTS configuration stored in [podcastfy/conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml). ### ElevenLabs TTS - `default_voices`: - `question`: "Chris" - Default voice for questions in the podcast. - `answer`: "Jessica" - Default voice for answers in the podcast. - `model`: "eleven_multilingual_v2" - The ElevenLabs TTS model to use. ### OpenAI TTS - `default_voices`: - `question`: "echo" - Default voice for questions using OpenAI TTS. - `answer`: "shimmer" - Default voice for answers using OpenAI TTS. - `model`: "tts-1-hd" - The OpenAI TTS model to use. ### Edge TTS - `default_voices`: - `question`: "en-US-JennyNeural" - Default voice for questions using Edge TTS. - `answer`: "en-US-EricNeural" - Default voice for answers using Edge TTS. ### General TTS Settings - `default_tts_model`: "openai" - Default text-to-speech model to use. - `output_directories`: - `transcripts`: "./data/transcripts" - Directory for storing generated transcripts. - `audio`: "./data/audio" - Directory for storing generated audio files. - `audio_format`: "mp3" - Format of the generated audio files. - `temp_audio_dir`: "data/audio/tmp/" - Temporary directory for audio processing. - `ending_message`: "Bye Bye!" - Message to be appended at the end of the podcast. ## Customization Examples These examples demonstrate how conversations can be altered to suit different purposes, from academic rigor to creative storytelling. 
The comments explain the rationale behind each choice, helping users understand how to tailor the configuration to their specific needs. ### Academic Debate This configuration transforms the podcast into a formal academic debate, encouraging deep analysis and critical thinking. It's designed for educational content or in-depth discussions on complex topics. ```python { "word_count": 3000, # Longer to allow for detailed arguments "conversation_style": ["formal", "analytical", "critical"], # Appropriate for academic discourse "roles_person1": "thesis presenter", # Presents the main argument "roles_person2": "counterargument provider", # Challenges the thesis "dialogue_structure": [ "Opening Statements", "Thesis Presentation", "Counterarguments", "Rebuttals", "Closing Remarks" ], # Mimics a structured debate format "podcast_name": "Scholarly Showdown", "podcast_tagline": "Where Ideas Clash and Knowledge Emerges", "engagement_techniques": [ "socratic questioning", "historical references", "thought experiments" ], # Techniques to stimulate critical thinking "creativity": 0 # Low creativity to maintain focus on facts and logic } ``` ### Storytelling Adventure This configuration turns the podcast into an interactive storytelling experience, engaging the audience in a narrative journey. It's ideal for fiction podcasts or creative content marketing. ```yaml word_count: 1000 # Shorter to maintain pace and suspense conversation_style: - narrative - suspenseful - descriptive # Creates an immersive story experience roles_person1: storyteller roles_person2: audience participator # Allows for interactive elements dialogue_structure: - Scene Setting - Character Introduction - Rising Action - Climax - Resolution # Follows classic storytelling structure podcast_name: Tale Spinners podcast_tagline: Where Every Episode is an Adventure engagement_techniques: - cliffhangers - vivid imagery - audience prompts # Keeps the audience engaged and coming back creativity: 0.9 # High creativity for unique and captivating stories ``` ## Customization Scenarios ### Using the Python Package When using the Podcastfy Python package, you can customize the conversation by passing a dictionary to the `conversation_config` parameter: ```python from podcastfy.client import generate_podcast custom_config = { "word_count": 200, "conversation_style": ["casual", "humorous"], "podcast_name": "Tech Chuckles", "creativity": 0.7 } generate_podcast( urls=["https://example.com/tech-news"], conversation_config=custom_config ) ``` ### Using the CLI When using the Podcastfy CLI, you can specify a path to a YAML file containing your custom configuration: ```bash podcastfy --url https://example.com/tech-news --conversation-config path/to/custom_config.yaml ``` The `custom_config.yaml` file should contain your configuration in YAML format: ```yaml word_count: 200 conversation_style: - casual - humorous podcast_name: Tech Chuckles creativity: 0.7 ``` ## Notes of Caution - The `word_count` is a target, and the AI may generate more or less than the specified word count. Low word counts are more likely to generate high-level discussions, while high word counts are more likely to generate detailed discussions. - The `output_language` defines both the language of the transcript and the language of the audio. Here's some relevant information: - Bottom-line: non-English transcripts are good enough but non-English audio is work-in-progress. - Transcripts are generated using Google's Gemini 1.5 Pro, which supports 100+ languages by default. 
- Audio is generated using `openai` (default), `elevenlabs`, `gemini`, or `edge` TTS models.
  - The `gemini` (Google) TTS model is English only.
  - The `openai` TTS model supports multiple languages automatically; however, non-English voices still present sub-par quality in my experience.
  - The `elevenlabs` TTS model has English voices by default. In order to use a non-English voice, you need to download a custom voice for the target language in your `elevenlabs` account settings and then set the `text_to_speech.elevenlabs.default_voices` parameters to the voice you want to use in the [config.yaml file](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/config.yaml) (this config file is only available in the source code of the project, not in the pip package, hence if you are using the pip package you will not be able to change the ElevenLabs voice). For more information on ElevenLabs voices, visit the [ElevenLabs Voice Library](https://elevenlabs.io/voice-library).
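Putting the language settings together, here is a sketch of a non-English run through the Python package, using only parameters documented above; the URL, podcast name, and target language are placeholders.

```python
from podcastfy.client import generate_podcast

# output_language drives both the transcript and the audio language.
custom_config = {
    "output_language": "French",
    "word_count": 500,
    "podcast_name": "Chronique Tech",  # placeholder name
}

generate_podcast(
    urls=["https://example.com/article"],
    conversation_config=custom_config,
)
```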
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/conversation_custom.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/conversation_custom.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 8308 }
# Docker Setup Guide for Podcastfy This guide explains how to use Docker to run Podcastfy in your local environment or for development. ## Prerequisites - Docker installed on your system [1] - Docker Compose [1] - API keys [2] [1] See Appendix A for detailed installation instructions. [2] See [config.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) for more details. ## Available Images Podcastfy provides pre-built Docker images through GitHub Container Registry (ghcr.io): 1. **Production Image**: `ghcr.io/souzatharsis/podcastfy:latest` - Contains the latest PyPI release - Recommended for production use 2. **Development Image**: `ghcr.io/souzatharsis/podcastfy:dev` - Includes development tools and dependencies - Used for contributing and development ## Deployment ### Quick Deployment Steps 1. Create a new directory and navigate to it: ```bash mkdir -p /path/to/podcastfy cd /path/to/podcastfy ``` 2. Create a `.env` file with your API keys (see [config.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) for more details): ```plaintext GEMINI_API_KEY=your_gemini_api_key OPENAI_API_KEY=your_openai_api_key # Optional: only needed for OpenAI TTS ``` 3. Create a `docker-compose.yml`: ```yaml version: '3.8' services: podcastfy: image: ghcr.io/souzatharsis/podcastfy:latest environment: - GEMINI_API_KEY=${GEMINI_API_KEY} - OPENAI_API_KEY=${OPENAI_API_KEY} ports: - "8000:8000" command: python3 -m podcastfy.server healthcheck: test: ["CMD", "python3", "-c", "import podcastfy"] interval: 30s timeout: 10s retries: 3 ``` 4. Pull and start the container: ```bash docker pull ghcr.io/souzatharsis/podcastfy:latest docker-compose up podcastfy ``` The service will be available at `http://localhost:8000` ### Directory Structure ``` /path/to/podcastfy/ ├── .env # Environment variables └── docker-compose.yml # Docker Compose configuration ``` ## Development Setup ### Using Pre-built Development Image 1. Pull the development image: ```bash docker pull ghcr.io/souzatharsis/podcastfy:dev ``` 2. Clone the repository and start development environment: ```bash git clone https://github.com/souzatharsis/podcastfy.git cd podcastfy docker-compose up podcastfy-dev ``` ### Building Locally Alternatively, you can build the images locally: ```bash # Build production image docker-compose build podcastfy # Build development image docker-compose build podcastfy-dev ``` ## Running Tests Run the test suite using: ```bash docker-compose up test ``` This will run tests in parallel using pytest-xdist. 
## Environment Variables Required environment variables: - `GEMINI_API_KEY` - Your Google Gemini API key - `OPENAI_API_KEY` - Your OpenAI API key (optional: only needed for OpenAI TTS) ## Container Details ### Production Container - Based on Ubuntu 24.04 - Installs Podcastfy from PyPI - Includes FFmpeg for audio processing - Runs in a Python virtual environment - Exposed port: 8000 ### Development Container - Based on Ubuntu 24.04 - Includes development tools (flake8, pytest) - Mounts local code for live development - Runs in editable mode (`pip install -e .`) - Exposed port: 8001 ## Continuous Integration The Docker images are automatically: - Built and tested on every push to main branch - Built and tested for all pull requests - Published to GitHub Container Registry - Tagged with version numbers for releases (v*.*.*) ## Health Checks All services include health checks that: - Run every 30 seconds - Verify Podcastfy can be imported - Timeout after 10 seconds - Retry up to 3 times ## Common Commands ```bash # Pull latest production image docker pull ghcr.io/souzatharsis/podcastfy:latest # Pull development image docker pull ghcr.io/souzatharsis/podcastfy:dev # Start production service docker-compose up podcastfy # Start development environment docker-compose up podcastfy-dev # Run tests docker-compose up test # Build images locally docker-compose build # View logs docker-compose logs # Stop all containers docker-compose down ``` ## Troubleshooting ### Common Issues 1. **API Key Errors** - Verify your `.env` file exists and contains valid API keys - Check if the environment variables are properly passed to the container 2. **Port Conflicts** - Ensure ports 8000 (production) and 8001 (development) are available - Modify the port mappings in `docker-compose.yml` if needed 3. **Volume Mounting Issues (Development)** - Verify the correct path to your local code - Check permissions on the mounted directories 4. **Image Pull Issues** - Ensure you have access to the GitHub Container Registry - If you see "unauthorized" errors, the image might be private - Try authenticating with GitHub: `docker login ghcr.io -u YOUR_GITHUB_USERNAME` ### Verifying Installation You can verify your installation by checking if the package can be imported: ```bash # Check production version docker run --rm ghcr.io/souzatharsis/podcastfy:latest python3 -c "import podcastfy" # Check development setup docker-compose exec podcastfy-dev python3 -c "import podcastfy" ``` ## System Requirements Minimum requirements: - Docker Engine 20.10.0 or later - Docker Compose 2.0.0 or later - Sufficient disk space for Ubuntu base image (~400MB) - Additional space for Python packages and FFmpeg ## Support If you encounter any issues: 1. Check the container logs: `docker-compose logs` 2. Verify all prerequisites are installed 3. Ensure all required environment variables are set 4. Open an issue on the [Podcastfy GitHub repository](https://github.com/souzatharsis/podcastfy/issues) ## Appendix A: Detailed Installation Guide ### Installing Docker #### Windows 1. Download and install [Docker Desktop for Windows](https://docs.docker.com/desktop/install/windows-install/) - For Windows 10/11 Pro, Enterprise, or Education: Enable WSL 2 and Hyper-V - For Windows 10 Home: Enable WSL 2 2. After installation, start Docker Desktop 3. Verify installation: ```bash docker --version ``` #### macOS 1. 
Download and install [Docker Desktop for Mac](https://docs.docker.com/desktop/install/mac-install/) - For Intel chip: Download Intel package - For Apple chip: Download Apple Silicon package 2. After installation, start Docker Desktop 3. Verify installation: ```bash docker --version ``` #### Ubuntu/Debian ```bash # Remove old versions sudo apt-get remove docker docker-engine docker.io containerd runc # Install prerequisites sudo apt-get update sudo apt-get install \ ca-certificates \ curl \ gnupg \ lsb-release # Add Docker's official GPG key sudo mkdir -p /etc/apt/keyrings curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg # Set up repository echo \ "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null # Install Docker Engine sudo apt-get update sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin # Add your user to docker group (optional, to run docker without sudo) sudo usermod -aG docker $USER newgrp docker # Verify installation docker --version ``` #### Other Linux Distributions - [CentOS](https://docs.docker.com/engine/install/centos/) - [Fedora](https://docs.docker.com/engine/install/fedora/) - [RHEL](https://docs.docker.com/engine/install/rhel/) ### Installing Docker Compose Docker Compose is included with Docker Desktop for Windows and macOS. For Linux: ```bash # Download the current stable release sudo curl -L "https://github.com/docker/compose/releases/download/v2.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose # Apply executable permissions sudo chmod +x /usr/local/bin/docker-compose # Verify installation docker-compose --version ``` ### Post-Installation Steps 1. Verify Docker is running: ```bash docker run hello-world ``` 2. Configure Docker to start on boot (Linux only): ```bash sudo systemctl enable docker.service sudo systemctl enable containerd.service ``` ## Appendix B: Getting API Keys ### Google Gemini API Key 1. Visit [Google AI Studio](https://makersuite.google.com/app/apikey) 2. Create or sign in to your Google account 3. Click "Create API Key" 4. Copy and save your API key ### OpenAI API Key You only need an OpenAI API key if you want to use the OpenAI Text-to-Speech model. 1. Visit [OpenAI API Keys](https://platform.openai.com/api-keys) 2. Create or sign in to your OpenAI account 3. Click "Create new secret key" 4. Copy and save your API key ## Appendix C: Installation Validation After installing all prerequisites, verify everything is set up correctly: ```bash # Check Docker version docker --version # Check Docker Compose version docker-compose --version # Verify Docker daemon is running docker ps # Test Docker functionality docker run hello-world ```
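The health checks above reduce to importing the package. A slightly more informative sketch, which you could save inside the container as a script (the file name `check.py` here is hypothetical), also reports whether the environment variables from the Environment Variables section are visible:

```python
import importlib
import os
import sys

# Mirror the compose healthcheck: confirm the package imports...
try:
    importlib.import_module("podcastfy")
    print("podcastfy import: OK")
except ImportError as exc:
    sys.exit(f"podcastfy import failed: {exc}")

# ...and report on the API keys passed into the container.
for key in ("GEMINI_API_KEY", "OPENAI_API_KEY"):
    print(f"{key}: {'set' if os.environ.get(key) else 'missing'}")
```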
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/docker.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/docker.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 9079 }
# How to

All examples below assume you have Podcastfy installed and running.

## Table of Contents

- [Custom LLM Support](#custom-llm-support)
- [Running Local LLMs](#running-local-llms)
- [How to use your own voice in audio podcasts](#how-to-use-your-own-voice-in-audio-podcasts)
- [How to customize the conversation](#how-to-customize-the-conversation)
- [How to generate multilingual content](#how-to-generate-multilingual-content)
- [How to steer the conversation](#how-to-steer-the-conversation)

## Custom LLM Support

Podcastfy offers a range of LLM models for generating transcripts, including OpenAI, Anthropic, and Google models as well as local LLM models.

### Cloud-based LLMs

By default, Podcastfy uses Google's `gemini-1.5-pro-latest` model. To select a particular cloud-based LLM model, pass the `llm_model_name` and `api_key_label` parameters to the `generate_podcast` function. For example, to use OpenAI's `gpt-4-turbo` model, pass `llm_model_name="gpt-4-turbo"` and `api_key_label="OPENAI_API_KEY"`.

```python
audio_file = generate_podcast(
    urls=["https://en.wikipedia.org/wiki/Artificial_intelligence"],
    llm_model_name="gpt-4-turbo",
    api_key_label="OPENAI_API_KEY"
)
```

Remember to have the correct API key label and value in your environment variables (`.env` file).

### Running Local LLMs

See [local_llm.md](local_llm.md) for more details.

## How to use your own voice in audio podcasts

You just need to use the ElevenLabs TTS backend and pass a custom config to use your voice instead of Podcastfy's default:

1. Create an ElevenLabs account, then get and [set up](https://github.com/souzatharsis/podcastfy/blob/main/usage/config.md) your ElevenLabs API key.
2. Clone your voice on the ElevenLabs website (let's say its name is 'Robbert').
3. Create a custom conversation config (let's call it custom_config.yaml) that uses your voice name instead of the default, as described [here](https://github.com/souzatharsis/podcastfy/blob/main/usage/conversation_custom.md#text-to-speech-tts-settings). Set either the question or the answer voice to 'Robbert' under elevenlabs > default_voices.
4. Run Podcastfy with the tts-model param set to elevenlabs:

CLI
```
python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs --conversation-config path/to/custom_config.yaml
```

For a Python example, check out the Customization section in the [Python notebook](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy.ipynb).

## How to customize the conversation

You can customize the conversation by passing a custom [conversation_config.yaml](https://github.com/souzatharsis/podcastfy/blob/main/podcastfy/conversation_config.yaml) file to the CLI:

```
python -m podcastfy.client --url https://example.com/article1 --url https://example.com/article2 --tts-model elevenlabs --conversation-config path/to/custom_config.yaml
```

You can also pass a dictionary with the custom config to the Python interface's generate_podcast function:

```python
from podcastfy.client import generate_podcast

custom_config = {
    "word_count": 200,
    "conversation_style": ["casual", "humorous"],
    "podcast_name": "Tech Chuckles",
    "creativity": 0.7
}

generate_podcast(
    urls=["https://example.com/tech-news"],
    conversation_config=custom_config
)
```

For more details, check out [conversation_custom.md](https://github.com/souzatharsis/podcastfy/blob/main/usage/conversation_custom.md).

## How to generate multilingual content

In order to generate transcripts in a target language, simply set `output_language` to your target language.
See [How to customize the conversation](#how-to-customize-the-conversation) for how to pass a custom configuration to Podcastfy. Set --transcript-only to get only the transcript without audio generation.

In order to generate audio, you can simply use the OpenAI TTS model, which is multilingual by default. However, in my experience OpenAI's multilingual TTS quality is subpar. Instead, consider using the ElevenLabs backend. See [How to use your own voice in audio podcasts](#how-to-use-your-own-voice-in-audio-podcasts), but instead of using your own voice you should download and set a voice in your target language for it to work.

Sample audio:
- [French](https://github.com/souzatharsis/podcastfy/blob/main/data/audio/podcast_FR_AGRO.mp3)
- [Portuguese-BR](https://github.com/souzatharsis/podcastfy/blob/main/data/audio/podcast_thatupiso_BR.mp3)

The PT-BR audio actually uses my own cloned voice as AI Host 2.

## How to steer the conversation

You can guide the conversation focus and topics by setting the `user_instructions` parameter in your custom configuration. This allows you to provide specific instructions to the AI hosts about what aspects they should emphasize or explore.

Things to try:
- Focus on a specific topic (e.g. "Focus the discussion on key capabilities and limitations of modern AI models")
- Target a specific audience (e.g. "Explain concepts in a way that's accessible to someone new to Computer Science")

For example, using the CLI with a custom YAML:

```yaml
user_instructions: "Make connections with quantum computing"
```

```
python -m podcastfy.client --url https://en.wikipedia.org/wiki/Artificial_intelligence --conversation-config path/to/custom_config.yaml
```
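The same steering works through the Python package, since `user_instructions` is just another key in the conversation config. A short sketch, reusing the URL and instruction text from the example above:

```python
from podcastfy.client import generate_podcast

# Steer the hosts with user_instructions inside the conversation config.
custom_config = {
    "user_instructions": "Make connections with quantum computing",
}

generate_podcast(
    urls=["https://en.wikipedia.org/wiki/Artificial_intelligence"],
    conversation_config=custom_config,
)
```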
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/how-to.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/how-to.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 5293 }
Podcastfy is licensed under Apache 2.0. The Apache License 2.0 is a permissive free software license that allows you to use this software for both non-commercial and commercial purposes. Please review the [License](../LICENSE) in order to know your obligations.

Here is a set of steps I will list without any warranty or liability:

1. Include a copy of the license in your project: in your project root, create a NOTICE.txt or THIRD_PARTY_LICENSES.txt file and include the content from the file [NOTICE](../NOTICE).

2. Add attribution in your README.md:
```markdown
## Acknowledgments

This project includes code from [Podcastfy](https://github.com/souzatharsis/podcastfy/), licensed under the Apache License 2.0.
```

3. Keep the original copyright notices in any files you copy/modify.

4. If you modified the code, indicate your changes:
```python
# Modified from original source: [Podcastfy](https://github.com/souzatharsis/podcastfy/)
# Changes made:
# - Added feature X
# - Modified function Y
# - Removed component Z
```

Important points:
- You don't need to use the same license for your project
- You must preserve all copyright, patent, and trademark notices
- State significant modifications you made
- Include the original Apache 2.0 license text
- Attribution should be clear and reasonable
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/license-guide.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/license-guide.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1300 }
# Local LLM Support

Running local LLMs can offer several advantages such as:

- Enhanced privacy and data security
- Cost control and no API rate limits
- Greater customization and fine-tuning options
- Reduced vendor lock-in

We enable serving local LLMs with [llamafile](https://github.com/Mozilla-Ocho/llamafile). In the API, local LLM support is available through the `is_local` parameter. If `is_local=True`, then a local (llamafile) LLM model is used to generate the podcast transcript. Llamafiles of LLM models can be found on [HuggingFace, which today offers 156+ models](https://huggingface.co/models?library=llamafile).

All you need to do is:

1. Download a llamafile from HuggingFace
2. Make the file executable
3. Run the file

Here's a simple bash script that shows all 3 setup steps for running TinyLlama-1.1B locally:

```bash
# Download a llamafile from HuggingFace
wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile

# Make the file executable. On Windows, instead just rename the file to end in ".exe".
chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile

# Start the model server. Listens at http://localhost:8080 by default.
./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser
```

Now you can use the local LLM to generate a podcast transcript (or audio) by setting the `is_local` parameter to `True`.

## Python API

```python
from podcastfy.client import generate_podcast

# Generate a podcast about artificial intelligence using a local LLM
generate_podcast(
    urls=["www.souzatharsis.com"],
    is_local=True  # Using a local LLM
)
```

## CLI

To use a local LLM model via the command-line interface, you can use the `--local` or `-l` flag. Here's an example of how to generate a transcript using a local LLM:

```bash
python -m podcastfy.client --url https://example.com/article1 --transcript-only --local
```

## Notes of caution

When using local LLM models versus widely known private large language models:

1. Performance: Local LLMs often have lower performance compared to large private models due to size and training limitations.
2. Resource requirements: Running local LLMs can be computationally intensive, requiring significant CPU/GPU resources.
3. Limited capabilities: Local models may struggle with complex tasks or specialized knowledge that larger models handle well.
4. Reduced multimodal abilities: Local LLMs are assumed to be text-only capable.
5. Potential instability: Local models may produce less consistent or stable outputs compared to well-tested private models, oftentimes producing transcripts that cannot be used for podcast generation (TTS) out of the box.
6. Limited context window: Local models often have smaller context windows, limiting their ability to process long inputs.

Always evaluate the trade-offs between using local LLMs and private models based on your specific use case and requirements. We highly recommend extensively testing your local LLM before productionizing an end-to-end podcast generation and/or manually checking the transcript before passing it to the TTS model.
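Because a stopped or misconfigured llamafile server only fails late in the run, it can be worth checking that the local server is reachable before generating. The following is a sketch that assumes the default llamafile address shown in the setup script above:

```python
import urllib.request

from podcastfy.client import generate_podcast

# The llamafile server started above listens on http://localhost:8080 by default.
try:
    urllib.request.urlopen("http://localhost:8080", timeout=5)
except OSError as exc:
    raise SystemExit(f"Local LLM server not reachable: {exc}")

generate_podcast(
    urls=["https://example.com/article1"],
    is_local=True,  # use the local (llamafile) model for transcript generation
)
```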
{ "source": "souzatharsis/podcastfy", "title": "docs/source/usage/local_llm.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/docs/source/usage/local_llm.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 3123 }
Tharsis Souza, PhD Tharsis Souza is a computer scientist passionate about data-driven products. He is Senior Vice President of Product Management, Modeling Engineering at Two Sigma Investments and Lecturer at Columbia University, Faculty member of the MSc. in Applied Analytics program. Prior to Two Sigma, he spent 10+ years delivering new technology products in a variety of companies from start-ups to Fortune 500’s in the U.S., Brazil, and the U.K. He’s an author of scholarly publications and a regular speaker in academic and business conferences. He also enjoys mentoring under-represented students & working professionals. Tharsis holds a Ph.D. in Computer Science from UCL, University of London following an M.Phil. and M.Sc. in Computer Science and a B.Sc. in Computer Engineering. Selected Interviews and Talks Mentorship Spotlight: Tharsis Souza, Two Sigma FactSet Investment Process Symposium - Innovative Data Panel BattleFin Alternative Data - Interview Beryl Elites - The Disruptors in Investment Management
{ "source": "souzatharsis/podcastfy", "title": "tests/data/mock/website.md", "url": "https://github.com/souzatharsis/podcastfy/blob/main/tests/data/mock/website.md", "date": "2024-09-30T22:35:09", "stars": 3232, "description": "An Open Source Python alternative to NotebookLM's podcast feature: Transforming Multimodal Content into Captivating Multilingual Audio Conversations with GenAI", "file_size": 1023 }
# NewsNow

![](screenshots/preview-1.png)
![](screenshots/preview-2.png)

English | [简体中文](README.zh-CN.md)

***Elegant reading of real-time and hottest news***

## Features

- Elegant design for a pleasant reading experience, keeping you up-to-date with the latest and hottest news.
- Supports GitHub login and data synchronization.
- Default cache duration is 30 minutes. Logged-in users can force fetch the latest data. However, the scraping interval is adjusted based on the update frequency of the content sources (as fast as every two minutes) to save resources and prevent frequent scraping that could lead to IP bans.

## Deployment

If login and caching are not required, you can directly deploy to platforms like Cloudflare Pages or Vercel. Just fork the repository and import it into the respective platform.

For Cloudflare Pages, you need to set the build command to `pnpm run build` and the build output directory to `dist/output/public`.

For login, which involves GitHub OAuth, you only need to [create a GitHub App](https://github.com/settings/applications/new). No special permissions are required. The callback URL should be `https://your-domain.com/api/oauth/github` (replace `your-domain` with your actual domain).

After creating the app, you will get a Client ID and Client Secret. Different platforms have different places to set environment variables; refer to the `example.env.server` file. If running locally, rename it to `.env.server` and add the necessary values.

```env
# Github Client ID
G_CLIENT_ID=
# Github Client Secret
G_CLIENT_SECRET=
# JWT Secret, usually the same as Client Secret
JWT_SECRET=
# Initialize database, must be set to true on first run, can be turned off afterward
INIT_TABLE=true
# Whether to enable cache
ENABLE_CACHE=true
```

This project primarily supports deployment on Cloudflare Pages and Docker. For Vercel, you need to set up your own database. Supported databases can be found at https://db0.unjs.io/connectors .

The Cloudflare D1 database can be used for free. To set it up, go to the Cloudflare Worker control panel and manually create a D1 database. Then, add the `database_id` and `database_name` to the corresponding fields in your `wrangler.toml` file. If you don't have a `wrangler.toml` file, you can rename `example.wrangler.toml` to `wrangler.toml` and modify it with your configuration. The changes will take effect on your next deployment.

For Docker deployment, run the following in the project root directory (which contains `docker-compose.yml`):

```sh
docker compose up
```

## Development

> [!TIP]
> Node version >= 20

```sh
corepack enable
pnpm i
pnpm dev
```

If you want to add data sources, refer to the `shared/sources` and `server/sources` directories. The project has complete types and a simple structure; feel free to explore.

## License

[MIT](./LICENSE) © ourongxing
{ "source": "ourongxing/newsnow", "title": "README.md", "url": "https://github.com/ourongxing/newsnow/blob/main/README.md", "date": "2024-09-23T14:16:12", "stars": 3190, "description": "Elegant reading of real-time and hottest news", "file_size": 2840 }
# NewsNow

![](screenshots/preview-1.png)
![](screenshots/preview-2.png)

[English](./README.md) | Simplified Chinese

***Elegant reading of real-time and hottest news***

## Features

- Elegant design and reading experience, keeping you up to date with the latest and hottest news.
- Supports GitHub login and data synchronization.
- Default cache duration is 30 minutes; logged-in users can force fetch the latest data. The scraping interval is also adjusted per content source's update frequency (as fast as every two minutes), saving resources while avoiding IP bans from overly frequent scraping.

## Deployment

If you don't need login or caching, you can deploy directly to Cloudflare Pages, Vercel, and similar platforms. Just fork the repository and import it into the corresponding platform.

For Cloudflare Pages, set the build command to `pnpm run build` and the build output directory to `dist/output/public`.

Login involves GitHub OAuth: just [create a GitHub App](https://github.com/settings/applications/new); no permissions need to be requested. The callback URL is `https://your-domain.com/api/oauth/github`.

You will then get a Client ID and Client Secret. Different platforms have different places to set environment variables; see the `example.env.server` file. If running locally, rename it to `.env.server` and fill it in as required.

```env
# GitHub Client ID
G_CLIENT_ID=
# GitHub Client Secret
G_CLIENT_SECRET=
# JWT Secret, usually the same as the Client Secret
JWT_SECRET=
# Initialize the database; must be set to true on first run, can be turned off afterwards
INIT_TABLE=true
# Whether to enable cache
ENABLE_CACHE=true
```

This project primarily targets Cloudflare Pages and Docker deployment. For Vercel you need to set up your own database; supported databases are listed at https://db0.unjs.io/connectors .

The Cloudflare D1 database can be used for free. Create a D1 database manually from the Cloudflare Worker control panel, then fill `database_id` and `database_name` into the corresponding fields of `wrangler.toml`. If you don't have a `wrangler.toml` file, rename `example.wrangler.toml` to `wrangler.toml` and adjust it to your own configuration; it will take effect on the next deployment.

For Docker deployment, you only need the `docker-compose.yaml` file in the project root directory; run the following in that directory:

```
docker compose up
```

## Development

> [!TIP]
> node version >= 20

```bash
corepack enable
pnpm i
pnpm dev
```

If you want to add data sources, look at the `shared/sources` and `server/sources` directories. The project has complete types and a simple structure; feel free to explore.

## License

[MIT](./LICENSE) © ourongxing

## Sponsor

If this project helps you, you can buy the cat some snacks. If you need customization or other help, please reach out via the methods below and leave a note.

![](./screenshots/reward.gif)
{ "source": "ourongxing/newsnow", "title": "README.zh-CN.md", "url": "https://github.com/ourongxing/newsnow/blob/main/README.zh-CN.md", "date": "2024-09-23T14:16:12", "stars": 3190, "description": "Elegant reading of real-time and hottest news", "file_size": 1583 }
# Contributor Covenant Code of Conduct ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. ## Our Standards Examples of behavior that contributes to creating a positive environment include: * Using welcoming and inclusive language * Being respectful of differing viewpoints and experiences * Gracefully accepting constructive criticism * Focusing on what is best for the community * Showing empathy towards other community members Examples of unacceptable behavior by participants include: * The use of sexualized language or imagery and unwelcome sexual attention or advances * Trolling, insulting/derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or electronic address, without explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. ## Scope This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at <[email protected]>. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version] [homepage]: http://contributor-covenant.org [version]: http://contributor-covenant.org/version/1/4/
{ "source": "github/CopilotForXcode", "title": "CODE_OF_CONDUCT.md", "url": "https://github.com/github/CopilotForXcode/blob/main/CODE_OF_CONDUCT.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 3230 }
# Development

## Prerequisites

Requires Node installed and `npm` available on your system path, e.g.

```sh
sudo ln -s `which npm` /usr/local/bin
sudo ln -s `which node` /usr/local/bin
```

For context, this is used by an Xcode run script as part of the build. Run scripts use a very limited path to resolve commands.

## Targets

### Copilot for Xcode

Copilot for Xcode is the host app containing both the XPCService and the editor extension. It provides the settings UI.

### EditorExtension

As its name suggests, the Xcode source editor extension. Its sole purpose is to forward editor content to the XPCService for processing, and update the editor with the returned content. Due to the sandboxing requirements for editor extensions, it has to communicate with a trusted, non-sandboxed XPCService (CommunicationBridge and ExtensionService) to bypass the limitations. The XPCService service name must be included in the `com.apple.security.temporary-exception.mach-lookup.global-name` entitlements.

### ExtensionService

The `ExtensionService` is a program that operates in the background. All features are implemented in this target.

### CommunicationBridge

It maintains the communication between Copilot for Xcode/EditorExtension and the ExtensionService.

### Core and Tool

Most of the logic is implemented inside the `Core` and `Tool` packages.

- The `Service` contains the implementations of the ExtensionService target.
- The `HostApp` contains the implementations of the Copilot for Xcode target.

## Building and Archiving the App

1. Update the xcconfig files, bridgeLaunchAgent.plist, and Tool/Configs/Configurations.swift.
2. Build or archive the Copilot for Xcode target.
3. If Xcode complains that the pro package doesn't exist, please remove the package from the project.

## Testing Source Editor Extension

Run the `ExtensionService`, `CommunicationBridge`, and `EditorExtension` targets together. Read [Testing Your Source Editor Extension](https://developer.apple.com/documentation/xcodekit/testing_your_source_editor_extension) for more details.

## SwiftUI Previews

SwiftUI Previews do not work well with Objective-C packages when running with app targets. To use previews, please switch schemes to the package product targets.

## Unit Tests

To run unit tests, run the test action from the `Copilot for Xcode` target. New tests should be added to `TestPlan.xctestplan`.

## Code Style

We use SwiftFormat to format the code. The source code mostly follows the [Ray Wenderlich Style Guide](https://github.com/raywenderlich/swift-style-guide) very closely with the following exception:

- Use the Xcode default of 4 spaces for indentation.

## App Versioning

The app version and the versions of all targets are controlled by `Version.xcconfig`.
{ "source": "github/CopilotForXcode", "title": "DEVELOPMENT.md", "url": "https://github.com/github/CopilotForXcode/blob/main/DEVELOPMENT.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 2818 }
# <img align="center" height="70" src="./Docs/AppIcon.png"/> GitHub Copilot for Xcode [GitHub Copilot](https://github.com/features/copilot) is an AI pair programmer tool that helps you write code faster and smarter. Copilot for Xcode is an Xcode extension that provides inline coding suggestions as you type and a chat assistant to answer your coding questions. ## Chat [Preview] GitHub Copilot Chat provides suggestions to your specific coding tasks via chat. <img alt="Chat of GitHub Copilot for Xcode" src="./Docs/chat_dark.gif" width="800" /> ## Code Completion You can receive auto-complete type suggestions from GitHub Copilot either by starting to write the code you want to use, or by writing a natural language comment describing what you want the code to do. <img alt="Code Completion of GitHub Copilot for Xcode" src="./Docs/demo.gif" width="800" /> ## Preview Policy Use of the GitHub Copilot Xcode Extension is subject to [GitHub's Pre-Release Terms](https://docs.github.com/en/site-policy/github-terms/github-pre-release-license-terms). We want to remind you that: > Previews may not be supported or may change at any time. You may receive confidential information through those programs that must remain confidential while the program is private. We'd love your feedback to make our Previews better. ## Requirements - macOS 12+ - Xcode 8+ - A GitHub Copilot subscription. To learn more, visit [https://github.com/features/copilot](https://github.com/features/copilot). ## Getting Started 1. Install via [Homebrew](https://brew.sh/): ```sh brew install --cask github-copilot-for-xcode ``` Or download the `dmg` from [the latest release](https://github.com/github/CopilotForXcode/releases/latest/download/GitHubCopilotForXcode.dmg). Drag `GitHub Copilot for Xcode` into the `Applications` folder: <p align="center"> <img alt="Screenshot of opened dmg" src="./Docs/dmg-open.png" width="512" /> </p> Updates can be downloaded and installed by the app. 1. Open the `GitHub Copilot for Xcode` application (from the `Applications` folder). Accept the security warning. <p align="center"> <img alt="Screenshot of MacOS download permission request" src="./Docs/macos-download-open-confirm.png" width="350" /> </p> 1. A background item will be added to enable Copilot to start when `GitHub Copilot for Xcode` is opened. <p align="center"> <img alt="Screenshot of background item" src="./Docs/background-item.png" width="370" /> </p> 1. Two permissions are required: `Accessibility` and `Xcode Source Editor Extension`. For more on why these permissions are required see [TROUBLESHOOTING.md](./TROUBLESHOOTING.md). The first time the application is run the `Accessibility` permission should be requested: <p align="center"> <img alt="Screenshot of accessibility permission request" src="./Docs/accessibility-permission-request.png" width="529" /> </p> The `Xcode Source Editor Extension` permission needs to be enabled manually. Click `Extension Permission` from the `GitHub Copilot for Xcode` application settings to open the System Preferences to the `Extensions` panel. Select `Xcode Source Editor` and enable `GitHub Copilot`: <p align="center"> <img alt="Screenshot of extension permission" src="./Docs/extension-permission.png" width="582" /> </p> 1. After granting the extension permission, open Xcode. Verify that the `Github Copilot` menu is available and enabled under the Xcode `Editor` menu. 
<br> <p align="center"> <img alt="Screenshot of Xcode Editor GitHub Copilot menu item" src="./Docs/xcode-menu.png" width="648" /> </p> Keyboard shortcuts can be set for all menu items in the `Key Bindings` section of Xcode preferences. 1. To sign into GitHub Copilot, click the `Sign in` button in the settings application. This will open a browser window and copy a code to the clipboard. Paste the code into the GitHub login page and authorize the application. <p align="center"> <img alt="Screenshot of sign-in popup" src="./Docs/device-code.png" width="372" /> </p> 1. To install updates, click `Check for Updates` from the menu item or in the settings application. After installing a new version, Xcode must be restarted to use the new version correctly. New versions can also be installed from `dmg` files downloaded from the releases page. When installing a new version via `dmg`, the application must be run manually the first time to accept the downloaded from the internet warning. 1. To avoid confusion, we recommend disabling `Predictive code completion` under `Xcode` > `Preferences` > `Text Editing` > `Editing`. 1. Press `tab` to accept the first line of a suggestion, hold `option` to view the full suggestion, and press `option` + `tab` to accept the full suggestion. <p align="center"> <img alt="Screenshot of welcome screen" src="./Docs/welcome.png" width="672" /> </p> ## How to use Chat [Preview] Open Copilot Chat in GitHub Copilot. - Open via the Xcode menu `Xcode -> Editor -> GitHub Copilot -> Open Chat`. <p align="center"> <img alt="Screenshot of Xcode Editor GitHub Copilot menu item" src="./Docs/xcode-menu_dark.png" width="648" /> </p> - Open via GitHub Copilot app menu `Open Chat`. <p align="center"> <img alt="Screenshot of GitHub Copilot menu item" src="./Docs/copilot-menu_dark.png" width="182" /> </p> ## How to use Code Completion Press `tab` to accept the first line of a suggestion, hold `option` to view the full suggestion, and press `option` + `tab` to accept the full suggestion. ## License This project is licensed under the terms of the MIT open source license. Please refer to [LICENSE.txt](./LICENSE.txt) for the full terms. ## Privacy We follow responsible practices in accordance with our [Privacy Statement](https://docs.github.com/en/site-policy/privacy-policies/github-privacy-statement). To get the latest security fixes, please use the latest version of the GitHub Copilot for Xcode. ## Support We’d love to get your help in making GitHub Copilot better! If you have feedback or encounter any problems, please reach out on our [Feedback forum](https://github.com/orgs/community/discussions/categories/copilot). ## Acknowledgements Thank you to @intitni for creating the original project that this is based on. Attributions can be found under About when running the app or in [Credits.rtf](./Copilot%20for%20Xcode/Credits.rtf).
{ "source": "github/CopilotForXcode", "title": "README.md", "url": "https://github.com/github/CopilotForXcode/blob/main/README.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 6548 }
Thanks for helping make GitHub safe for everyone. # Security GitHub takes the security of our software products and services seriously, including all of the open source code repositories managed through our GitHub organizations, such as [GitHub](https://github.com/GitHub). Even though [open source repositories are outside of the scope of our bug bounty program](https://bounty.github.com/index.html#scope) and therefore not eligible for bounty rewards, we will ensure that your finding gets passed along to the appropriate maintainers for remediation. ## Reporting Security Issues If you believe you have found a security vulnerability in any GitHub-owned repository, please report it to us through coordinated disclosure. **Please do not report security vulnerabilities through public GitHub issues, discussions, or pull requests.** Instead, please send an email to opensource-security[@]github.com. Please include as much of the information listed below as you can to help us better understand and resolve the issue: * The type of issue (e.g., buffer overflow, SQL injection, or cross-site scripting) * Full paths of source file(s) related to the manifestation of the issue * The location of the affected source code (tag/branch/commit or direct URL) * Any special configuration required to reproduce the issue * Step-by-step instructions to reproduce the issue * Proof-of-concept or exploit code (if possible) * Impact of the issue, including how an attacker might exploit the issue This information will help us triage your report more quickly. ## Policy See [GitHub's Safe Harbor Policy](https://docs.github.com/en/site-policy/security-policies/github-bug-bounty-program-legal-safe-harbor#1-safe-harbor-terms)
{ "source": "github/CopilotForXcode", "title": "SECURITY.md", "url": "https://github.com/github/CopilotForXcode/blob/main/SECURITY.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 1743 }
# Support ## How to get help We’d love to get your help in making GitHub Copilot better! If you have feedback or encounter any problems, please reach out on our [Feedback forum](https://github.com/orgs/community/discussions/categories/copilot). GitHub Copilot for Xcode is under active development and maintained by GitHub staff. We will do our best to respond to support, feature requests, and community questions in a timely manner. ## GitHub Support Policy GitHub Copilot for Xcode is considered a Beta Preview under the [GitHub Terms of Service](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service#j-beta-previews). Once GitHub Copilot for Xcode is generally available, it will be subject to the [GitHub Additional Product Terms](https://docs.github.com/en/site-policy/github-terms/github-terms-for-additional-products-and-features).
{ "source": "github/CopilotForXcode", "title": "SUPPORT.md", "url": "https://github.com/github/CopilotForXcode/blob/main/SUPPORT.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 869 }
# Troubleshooting for Copilot for Xcode

If you are having trouble with Copilot for Xcode, follow these steps to resolve common issues:

1. Check for updates and restart Xcode. Ensure that Copilot for Xcode has the [latest release](https://github.com/github/CopilotForXcode/releases/latest) by clicking `Check for Updates` in the settings or under the status menu. After updating, restart Xcode.

2. Ensure that the Copilot for Xcode extension is enabled. Open Xcode, go to the top menu bar, and open the `Editor` menu. If there is no `GitHub Copilot` menu under `Editor`, then the [extension permission](#extension-permission) needs to be enabled. If the `GitHub Copilot` menu is shown but grayed out, then Xcode needs to be restarted to enable the extension.

3. Need more help? If these steps don't resolve the issue, please [open an issue](https://github.com/github/CopilotForXcode/issues/new/choose). Make sure to [include logs](#logs) and any other relevant information.

## Extension Permission

GitHub Copilot for Xcode is an Xcode Source Editor extension and requires the extension to be enabled. In the Copilot for Xcode settings, clicking `Extension Permission` will open the System Settings to the Extensions page, where `GitHub Copilot` can be enabled under `Xcode Source Editor`.

Or you can navigate to the permission manually depending on your OS version:

| macOS | Location |
| :--- | :--- |
| 15 | System Settings > General > Login Items > Extensions > Xcode Source Editor |
| 13 & 14 | System Settings > Privacy & Security > Extensions > Xcode Source Editor |
| 12 | System Preferences > Extensions |

## Accessibility Permission

GitHub Copilot for Xcode requires the accessibility permission to receive real-time updates from the active Xcode editor. [The XcodeKit API](https://developer.apple.com/documentation/xcodekit) enabled by the Xcode Source Editor extension permission only provides information when manually triggered by the user. In order to generate suggestions as you type, the accessibility permission is used to read the Xcode editor content in real-time.

The accessibility permission is also used to accept suggestions when `tab` is pressed.

The accessibility permission is __not__ used to read or write to any applications besides Xcode. There are no granular options for the permission, but you can audit the usage in this repository: search for `CGEvent` and `AX`*.

Enable it in System Settings under `Privacy & Security` > `Accessibility` > `GitHub Copilot for Xcode Extension` and turn on the toggle.

## Logs

Logs can be found in `~/Library/Logs/GitHubCopilot/`; the most recent log file is:

```
~/Library/Logs/GitHubCopilot/github-copilot-for-xcode.log
```

To enable verbose logging, open the GitHub Copilot for Xcode settings and enable `Verbose Logging` in the `Advanced` tab. After enabling verbose logging, restart Copilot for Xcode for the change to take effect.
{ "source": "github/CopilotForXcode", "title": "TROUBLESHOOTING.md", "url": "https://github.com/github/CopilotForXcode/blob/main/TROUBLESHOOTING.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 2935 }
At the moment we are not accepting contributions to the repository.
{ "source": "github/CopilotForXcode", "title": ".github/pull_request_template.md", "url": "https://github.com/github/CopilotForXcode/blob/main/.github/pull_request_template.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 67 }
--- name: Bug report about: Create a report to help us improve GitHub Copilot for Xcode --- <!-- Please search existing issues to avoid creating duplicates --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> **Versions** - Copilot for Xcode: [e.g. 0.25.0] - Xcode: [e.g. 16.0] - macOS: [e.g. 14.6.1] **Steps to reproduce** 1. 2. **Screenshots** <!-- Add screenshots or screen recordings to help explain your problem. --> **Logs** <!-- Attach relevant logs from `~/Library/Logs/GitHubCopilot/` --> **Additional context** <!-- Add any other context about the problem here. -->
{ "source": "github/CopilotForXcode", "title": ".github/ISSUE_TEMPLATE/bug_report.md", "url": "https://github.com/github/CopilotForXcode/blob/main/.github/ISSUE_TEMPLATE/bug_report.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 617 }
--- name: Feature request about: Suggest an idea for GitHub Copilot for Xcode --- <!-- Please search existing issues to avoid creating duplicates --> <!-- Describe the feature you'd like. -->
{ "source": "github/CopilotForXcode", "title": ".github/ISSUE_TEMPLATE/feature_request.md", "url": "https://github.com/github/CopilotForXcode/blob/main/.github/ISSUE_TEMPLATE/feature_request.md", "date": "2024-09-19T17:48:15", "stars": 3185, "description": "Xcode extension for GitHub Copilot", "file_size": 193 }
# Micro Racer Car - StuckAtPrototype
*thank you for the overwhelmingly positive support!! such an inspiration to keep making fun open source content!*

## Youtube Video
A bit of a back-story of how this project came to be.

[![My Kickstarter failed, so I open sourced it](https://img.youtube.com/vi/6jzG-BMannc/0.jpg)](https://www.youtube.com/watch?v=6jzG-BMannc)

Sub if you like what you see.

*Some pictures of the project*

![Picture of PCB car](project_pictures/242A0548.png)
![Picture of PCB car](project_pictures/242A1274.png)

## Hardware Remote for this car
https://github.com/StuckAtPrototype/Thumbtroller

## Project Structure
The project consists of
1. Firmware
2. Hardware
3. Mechanical
4. Scripts

These are structured into their own files. I could have used submodules, but decided against it.

### 1. Firmware
Code for the little car. This lives on the ESP32.

#### Requirements
- ESP32 IDF version 5.3.1
- USB to Serial dongle
- Target set to ESP32-H2

### 2. Hardware
#### Schematic
PDF schematic included for your viewing pleasure.

#### PCBs
All the gerber files you'd need to send to a fab house.

#### Kicad
All the files you'd need to expand and work on this further. If you'd like.

### 3. Mechanical
#### Enclosure
All the step files you need to make one of these. An extrusion printer works well for this part.

#### Wheels
A bit of caution on this one: you'll need an SLA printer.

### 4. Scripts
Did anyone say neural networks? This folder has all the Python code you'd need to train up your own neural network for the car. It also contains scripts that let you drive it using a keyboard -- just in case you don't want to make a physical controller.

#### Requirements
- Python 3
- You'll need to install a bunch of pip modules

#### Training the neural network
Training the neural network is as simple as running the training script with the data in the `color_data.txt` file. For the data format, see the sample data in the file. You need to stick to the formatting.

To train, run `python trainer.py`

#### Keyboard controller
To run the script: `python controller.py`

Use `w` `s` `a` `d` for control. Modify the script for different speeds, etc.

*Protocol for motor control*

`60,1,60,1,5` translates to:
- motor side A: speed 60, direction forward
- motor side B: speed 60, direction forward
- 500 milliseconds run time

See the firmware file `motor.c` if you need more details.

## What the project could use
1. Cleanup, but that's true for almost anything out there
2. Some fun code that makes the little car drive using the color sensor -- think very fancy line follower
3. LLM integration -- ChatGPT driving a physical little robot? anyone? :)

## If you take it further
Let me know if you ever make one of these, I'd love to see it. Seriously, that'd be exciting and inspiring to keep making my projects open source!

---
## License
### Apache 2.0 -- i.e. use as you'd like
http://www.apache.org/licenses/LICENSE-2.0

---
## Special Thanks
Thanks to Michael Angerer for his open sourced `esp32_ble_ota` project. I used it to get BLE running in this project. His blog post and github repo are a great resource. Check it out.
https://github.com/michael-angerer/esp32_ble_ota
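To make the motor-control protocol described under the Scripts section more concrete, here is a small illustrative Python helper that builds a command string in that format. The function name is hypothetical, and the assumption that the last field encodes run time in units of 100 ms (since `5` maps to 500 ms in the README example) is mine; check `motor.c` and `controller.py` for the authoritative encoding.

```python
def motor_command(speed_a, dir_a, speed_b, dir_b, run_time_ms):
    """Build a command string like "60,1,60,1,5".

    speed_a / speed_b: motor speeds (0-100 assumed)
    dir_a / dir_b: 1 = forward (other values per motor.c)
    run_time_ms: run time in milliseconds; assumed to be sent in units of
                 100 ms, since "5" corresponds to 500 ms in the README example.
    """
    return f"{speed_a},{dir_a},{speed_b},{dir_b},{run_time_ms // 100}"

print(motor_command(60, 1, 60, 1, 500))  # -> "60,1,60,1,5"
```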
{ "source": "StuckAtPrototype/Racer", "title": "README.md", "url": "https://github.com/StuckAtPrototype/Racer/blob/master/README.md", "date": "2024-10-15T03:32:13", "stars": 3088, "description": null, "file_size": 3212 }
## trainer.py
#### to run, simply call `python trainer.py`

trainer.py is the training algorithm for the neural network embedded within the Racer.

To use `trainer.py`, ensure you have classified data in `color_data.txt`.

Here is the format:
```
W (17447) main: Color values - Red: 1822, Green: 2184, Blue: 1762, Clear: 2008, Color: White
W (9847) main: Color values - Red: 220, Green: 472, Blue: 124, Clear: 1780, Color: Black
W (16447) main: Color values - Red: 488, Green: 748, Blue: 196, Clear: 1620, Color: Red
W (19947) main: Color values - Red: 428, Green: 1368, Blue: 336, Clear: 2148, Color: Green
```
It requires all colors to be present to train. Around 200 samples of each color seems to be enough.

## controller.py
controller.py is a simple BLE script that accepts keyboard input and relays it to the Racer. It's great for debugging.

#### to run, simply call `python controller.py`
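For illustration, here is a minimal sketch of how lines in `color_data.txt` could be parsed into feature/label pairs before training. This is an assumed preprocessing step, not the actual code in `trainer.py`; the helper names (`parse_line`, `load_dataset`) are hypothetical.

```python
import re

# Matches the log format shown above, e.g.
# "W (17447) main: Color values - Red: 1822, Green: 2184, Blue: 1762, Clear: 2008, Color: White"
LINE_RE = re.compile(
    r"Red:\s*(\d+),\s*Green:\s*(\d+),\s*Blue:\s*(\d+),\s*Clear:\s*(\d+),\s*Color:\s*(\w+)"
)

def parse_line(line):
    """Return ([red, green, blue, clear], label) for one log line, or None if it doesn't match."""
    m = LINE_RE.search(line)
    if not m:
        return None
    r, g, b, c = (int(x) for x in m.groups()[:4])
    return [r, g, b, c], m.group(5)

def load_dataset(path="color_data.txt"):
    """Parse the whole file into parallel feature/label lists, skipping malformed lines."""
    features, labels = [], []
    with open(path) as f:
        for line in f:
            parsed = parse_line(line)
            if parsed:
                features.append(parsed[0])
                labels.append(parsed[1])
    return features, labels

if __name__ == "__main__":
    X, y = load_dataset()
    print(f"Loaded {len(X)} samples, classes: {sorted(set(y))}")
```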
{ "source": "StuckAtPrototype/Racer", "title": "scripts/README.md", "url": "https://github.com/StuckAtPrototype/Racer/blob/master/scripts/README.md", "date": "2024-10-15T03:32:13", "stars": 3088, "description": null, "file_size": 890 }
<div align="center"> # SkyThought [![Github](https://img.shields.io/badge/SkyThought-000000?style=for-the-badge&logo=github&logoColor=000&logoColor=white)](https://github.com/NovaSky-AI/SkyThought) [![Twitter](https://img.shields.io/badge/NovaSky-white?style=for-the-badge&logo=X&logoColor=000&color=000&labelColor=white)](https://x.com/NovaSkyAI) [![Hugging Face Collection](https://img.shields.io/badge/NovaSky-fcd022?style=for-the-badge&logo=huggingface&logoColor=000&labelColor)](https://huggingface.co/NovaSky-AI) [![Discord](https://img.shields.io/badge/NovaSky-5865F2?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/RBAjeWSA) <div align="center" style="font-family: Arial, sans-serif;"> <p> <a href="#news" style="text-decoration: none; font-weight: bold;">News</a> • <a href="#links" style="text-decoration: none; font-weight: bold;">Links</a> • <a href="#getting-started" style="text-decoration: none; font-weight: bold;">Getting Started</a> • <a href="#evaluation" style="text-decoration: none; font-weight: bold;">Evaluation</a> • <a href="#citation" style="text-decoration: none; font-weight: bold;">Citation</a> • <a href="#acknowledgement" style="text-decoration: none; font-weight: bold;">Acknowledgement</a> </p> </div> </div> # News - **[2025/02/21]** 🎉 We released S*: Test time scaling for code generation ([paper](https://arxiv.org/pdf/2502.14382), [code](https://github.com/NovaSky-AI/SkyThought/tree/main/skythought/test-time-scaling)), a simple and extensible test time scaling framework for code generation. - **[2025/02/11]** 🎉 We released Sky-T1-7B ([model](https://huggingface.co/NovaSky-AI/Sky-T1-7B)) and Sky-T1-mini ([model](https://huggingface.co/NovaSky-AI/Sky-T1-mini)) to demonstrate the potential of RL in further enhancing model's capability beyond distillation. - **[2025/01/23]** ⚡️ We released Sky-T1-32B-Flash ([model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash), [data](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_preference_data_10k)) to tackle overthinking and reduce reasoning sequence lengths while maintaining accuracy. - **[2025/01/19]** 🎉 [Chat demo](http://164.152.23.196:3000/) for Sky-T1-32B-Preview is alive! Please check it out! - **[2025/01/10]** 🎉 We have released our Sky-T1-32B-Preview [model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) and [data](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_data_17k) through [HuggingFace](https://huggingface.co/NovaSky-AI)! # Links - 📜 [Sky-T1-7B and Sky-T1-mini Blog Post](https://novasky-ai.github.io/posts/sky-t1-7B/) - 📜 [Sky-T1-32B-Flash Blog Post](https://novasky-ai.github.io/posts/reduce-overthinking/) - 📜 [Sky-T1-32B-Preview model Blog Post](https://novasky-ai.github.io/posts/sky-t1/) - 🤗 [Sky-T1-32B-Preview model](https://huggingface.co/NovaSky-AI) # Getting Started We open source the code and scripts we used for data curation, training, and evaluation for Sky-T1-32B-Preview, you can find more details in each directory. - [`recipes`](./recipes/): Recipes - data curation steps and training strategies - for building our models `Sky-T1-32B-Flash`, `Sky-T1-32B-Preview` and `Sky-T1-7B` series. - [`skythought/evals`](./skythought/evals/): Our data generation and evaluation library. - [`skythought/train`](./skythought/train/): Training scripts for Sky-T1. We use [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to perform training. - [`skythought/skythought-rl`](./skythought/skythought-rl/): RL training code for Sky-T1-7B and Sky-T1-mini. 
# Evaluation ## Usage You can install the latest release from PyPI or from [source](#installing-from-source): ```shell pip install skythought ``` ### Installing from source ```shell # Clone the repository git clone https://github.com/NovaSky-AI/SkyThought.git cd SkyThought # Create and activate a virtual environment (using uv here) uv venv --python 3.10 source .venv/bin/activate # Install the package in editable mode uv pip install -e . ``` Running evaluation is as simple as: ```bash skythought evaluate --model NovaSky-AI/Sky-T1-32B-Preview --task aime24 ``` We support a wide variety of datasets in mathematics, science and coding: - AIME'24 - MATH500 - GPQADiamond - MMLU - ARC-Challenge - OlympiadBench - AMC'23 - TACO - APPS - LiveCodeBench - MMLU Pro - MinervaMath - GSM8K - AIME'25 For more details, please refer to our [evaluation guide](examples/evaluate.ipynb) and the [evaluation README](skythought/evals/README.md). ### Evaluation results Following, we show our evaluation results for the Sky-T1-32B-Preview model across math, coding, and science benchmarks. | Metric | Sky-T1-32B-Preview | Qwen-2.5-32B-Instruct | QwQ | o1-preview | |-----------------------|---------------------|--------|-------|------------| | Math500 | 86.4 | 81.4 | 92.2 | 81.4 | | AIME2024 | 43.3 | 16.7 | 50.0 | 40.0 | | LiveCodeBench-Easy | 86.3 | 84.6 | 90.7 | 92.9 | | LiveCodeBench-Medium | 56.8 | 40.8 | 56.3 | 54.9 | | LiveCodeBench-Hard | 17.9 | 9.8 | 17.1 | 16.3 | | GPQA-Diamond | 56.8 | 45.5 | 52.5 | 75.2 | | OlympiadBench (Math, EN) | 59.79 | 46.74 | 62.17 | 59.2 | #### Results on non-reasoning benchmarks We also evaluate on non-reasoning benchmarks (these are benchmarks for instruction-following, QA, etc) to test whether the model has traded-off capability in other domains for better performance in reasoning-related benchmarks. | Metric | Sky-T1-32B-Preview | Qwen-2.5-32B-Instruct | QwQ-32B-Preview | Eval Implementation | |---------|-------------------|---------------------|-----------------|-------------------| | MMLU (0 shot; no CoT) | **78.36** | 74.14 | 71.23 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) | | MMLU (5 shot; no CoT) | 82.46 | **82.62** | 82.32 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) | | ARC-C (0 shot; no CoT) | **49.49** | 49.4 | 49.66 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) | | IFEval | 75.79 | **78.74** | 42.51 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) | | LLM-as-a-Judge | 9.12 | **9.19** | 8.30 | [fastchat](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) | | MGSM (0 shot; `direct`) | 33 | **42.3** | 19.07 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) | | MGSM (8-shot; `direct`) | 58.4 | **61.47** | 58.5 | [lm_eval](https://github.com/EleutherAI/lm-evaluation-harness) | | BFCL-v3 | 53.18 | **58.92** | 17.41 | [BFCL](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard) | | Arena-Hard | **74.79** | 66.51 | 52.6 | [Arena-Hard-Auto](https://github.com/lmarena/arena-hard-auto) | For more details, refer [here](./skythought/evals/base_instruct_evals.md). ## Fully Open-source: Driving Progress Together We believe that open-source collaboration drives progress, and with Sky-T1-32B-Preview, we are fully committed to empowering the community. 
We open-source all details (i.e., data, codes, model weights) to enable the community to replicate and improve on our results *easily*: <table> <thead> <tr> <th>Model</th> <th style="background-color: #f2f2f2;"><div align="center">Sky-T1-32B-Preview</div></th> <th><div align="center">STILL-2</div></th> <th><div align="center">Journey</div></th> <th><div align="center">QwQ</div></th> <th><div align="center">o1</div></th> </tr> </thead> <tbody> <tr> <td>Data</td> <td style="background-color: #f2f2f2;"><div align="center">✅</div></td> <td><div align="center">✅</div></td> <td><div align="center">❌</div></td> <td><div align="center">❌</div></td> <td><div align="center">❌</div></td> </tr> <tr> <td>Code</td> <td style="background-color: #f2f2f2;"><div align="center">✅</div></td> <td><div align="center">❌</div></td> <td><div align="center">❌</div></td> <td><div align="center">❌</div></td> <td><div align="center">❌</div></td> </tr> <tr> <td>Report</td> <td style="background-color: #f2f2f2;"><div align="center">✅</div></td> <td><div align="center">✅</div></td> <td><div align="center">✅</div></td> <td><div align="center">❌</div></td> <td><div align="center">❌</div></td> </tr> <tr> <td>Math domain</td> <td style="background-color: #f2f2f2;"><div align="center">✅</div></td> <td><div align="center">✅</div></td> <td><div align="center">✅</div></td> <td><div align="center">✅</div></td> <td><div align="center">✅</div></td> </tr> <tr> <td>Coding domain</td> <td style="background-color: #f2f2f2;"><div align="center">✅</div></td> <td><div align="center">❌</div></td> <td><div align="center">❌</div></td> <td><div align="center">✅</div></td> <td><div align="center">✅</div></td> </tr> <tr> <td>Model Weights</td> <td style="background-color: #f2f2f2;"><div align="center">✅</div></td> <td><div align="center">✅</div></td> <td><div align="center">❌</div></td> <td><div align="center">✅</div></td> <td><div align="center">❌</div></td> </tr> </tbody> </table> # Citation The code in this repository is mostly described in the post below. Please consider citing this work if you find the repository helpful. ```bibtex @misc{sky_t1_2025, author = {NovaSky Team}, title = {Sky-T1: Train your own O1 preview model within $450}, howpublished = {https://novasky-ai.github.io/posts/sky-t1}, note = {Accessed: 2025-01-09}, year = {2025} } ``` # Acknowledgement This work is done at [Berkeley Sky Computing Lab](https://sky.cs.berkeley.edu/), with the amazing compute support from [Lambda Labs](https://lambdalabs.com/service/gpu-cloud?srsltid=AfmBOop5FnmEFTkavVtdZDsLWvHWNg6peXtat-OXJ9MW5GMNsk756PE5), [Anyscale](https://www.anyscale.com/), and [Databricks](https://www.databricks.com/). We would like to express our gratitude for the valuable academic feedback and support from the [Still-2 Team](https://arxiv.org/pdf/2412.09413), and Junyang Lin from the [Qwen Team](https://qwenlm.github.io/).
{ "source": "NovaSky-AI/SkyThought", "title": "README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 10427 }
# Sky-T1-7B

For detailed training recipes and technical details, refer to the [blog](https://novasky-ai.github.io/posts/sky-t1-7b/)

## SFT: Step 1 and Step 3 SFT

### Distillation Data Mixture

Make sure you have installed the `skythought` package as outlined in the [README.md](/README.md#usage). All the data curation commands are provided from the root directory of the repo.

For the distillation performed in step 1 and step 3, we use the following script for data generation. Replace `$MODEL_NAME` with the model to be distilled from. For each subset (`numina_math`, `numina_olympiads`, etc.), the `score` command requires the output directory from the previous `generate` command.

```shell
skythought generate \
    --task numina_math \
    --model $MODEL_NAME \
    --backend vllm \
    --backend-args tensor_parallel_size=4 \
    --sampling-params max_tokens=16384 \
    --result-dir ./data

skythought score \
    --task numina_math \
    --run-dir <path to output folder from generate>
```

```shell
skythought generate \
    --task numina_olympiads \
    --model $MODEL_NAME \
    --backend vllm \
    --backend-args tensor_parallel_size=4 \
    --sampling-params max_tokens=16384 \
    --end 40000 \
    --result-dir ./data

skythought score \
    --task numina_olympiads \
    --run-dir <path to output folder from generate>
```

```shell
skythought generate \
    --task numina_amc_aime \
    --model $MODEL_NAME \
    --backend vllm \
    --backend-args tensor_parallel_size=4 \
    --sampling-params max_tokens=16384 \
    --result-dir ./data

skythought score \
    --task numina_amc_aime \
    --run-dir <path to output folder from generate>
```

For step 1 and step 3 SFT, follow the instructions in `skythought/train`.

## RL: Step 2 and Step 4 RL

For RL training, install our modified fork of [VeRL](https://github.com/volcengine/verl) under `skythought/skythought-rl` and follow the instructions there. We also incorporate the math and coding testing utils from the [PRIME](https://github.com/PRIME-RL/PRIME) repo.

## Evaluation

For evaluation, we use the [script](https://github.com/QwenLM/Qwen2.5-Math/blob/main/evaluation/sh/eval.sh) from the [Qwen math eval suite](https://github.com/QwenLM/Qwen2.5-Math/tree/main/evaluation). We use vLLM version `0.6.2` for all evaluations. For AMC and AIME, we use temp=0.6, top_p=0.95 and n_sample=8. After the generation, we calculate the pass@1 using this [script](https://github.com/NovaSky-AI/SkyThought/tree/main/scripts/qwen_eval_bon.py). For MATH500 and OlympiadBench, we use greedy decoding. We use the [skythought system prompt](https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/evals/models/model_configs.yaml) when evaluating all the models trained by us, except for Sky-T1-mini, which is evaluated [without a system prompt](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B#usage-recommendations).
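As a reference for the pass@1 metric mentioned above: when `n_sample=8` responses are drawn per problem, pass@1 is simply the fraction of sampled responses judged correct, averaged over problems. The sketch below is a minimal illustration of that computation, not the actual `qwen_eval_bon.py` script.

```python
def pass_at_1(samples_per_problem):
    """samples_per_problem: list of lists of booleans, one inner list per problem,
    where each boolean says whether that sampled response was judged correct."""
    per_problem = [sum(s) / len(s) for s in samples_per_problem if s]
    return sum(per_problem) / len(per_problem)

# Example: 2 problems with 8 samples each
results = [
    [True, False, True, True, False, True, False, True],    # 5/8 correct
    [False, False, True, False, False, False, False, False] # 1/8 correct
]
print(pass_at_1(results))  # (0.625 + 0.125) / 2 = 0.375
```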
{ "source": "NovaSky-AI/SkyThought", "title": "recipes/sky-t1-7b/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/recipes/sky-t1-7b/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 2903 }
# Sky-T1-32B-Flash

[Model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Flash) | [Dataset](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_preference_data_10k) | [Blog](https://novasky-ai.github.io/posts/reduce-overthinking/)

For a detailed breakdown of the data curation steps and training methodology, refer to the [blog](https://novasky-ai.github.io/posts/reduce-overthinking/)

## Setup

Make sure you have installed the `skythought` package as outlined in the [README.md](/README.md#usage). All the data curation commands are provided from the root directory of the repo.

## Stage 1: Data Generation

We used `Sky-T1-32B-Preview` to generate responses to the 12K questions in the `PRM800K` dataset. For each question, we used a temperature of 1.0 and generated 8 responses to create a diversity of response lengths. We then formed preference pairs to contrast "verbose" vs. "concise" solutions. Specifically, from the generated responses, we picked the shortest correct response as the positive example and the longest correct response as the negative example. We discarded the rest of the generated responses, and discarded any questions that did not produce at least two correct responses.

We also incorporated a small number of coding preference pairs, which simultaneously boosts coding accuracy and further reduces coding generation lengths.

## Stage 2: Response Rewriting

The file `response_rewrite.py` provides a pipeline for filtering and rewriting responses generated with `inference_and_check.py`. We use `response_rewrite.py` to create preference pairs for preference optimization (e.g., DPO, SimPO); however, the logic can be edited for alternative filtering and rewriting steps. Details of the implemented logic can be found in `response_rewrite.py` or in [this blog post](https://novasky-ai.github.io/posts/reduce-overthinking).

To use our preference optimization pipeline, first generate and score multiple responses using `inference_and_check.py`. For example:

```shell
skythought evaluate --task math500 --model Qwen/Qwen2-7B-Instruct --backend vllm --backend-args tp=4 --sampling-params max_tokens=4096,temperature=0.7 --n 8 --result-dir ./
```

This will save the results in a directory with the following structure:

```
├── Qwen_Qwen2-7B-Instruct_math500_myHash
│   ├── results.json
│   └── summary.json
```

Then, use `response_rewrite.py` to process the responses into preference pairs. By default, the shortest correct responses will be used as positive examples and the longest correct responses will be used as negative samples. The argument `--SILC` can be used to also include short incorrect responses as negative examples and long correct responses as positive samples.

```shell
python scripts/response_rewrite.py --SILC --rewrite-model meta-llama/Meta-Llama-3-8B-Instruct --target-model NovaSky-AI/Sky-T1-32B-Preview --dataset [PATH_TO_GENERATED_RESPONSES] --result-dir ./ --checkpoint --tp 8
```

where `[PATH_TO_GENERATED_RESPONSES]` is the path to the `results.json` file. The `--checkpoint` argument can optionally be used to save intermediate files of the processed data between steps, in case of failure.

The resulting `.json` files can be used to train a model with preference optimization algorithms. See the `/train/` directory for more details.
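To make the Stage 1 pairing rule above concrete, here is a minimal illustrative sketch of the selection logic: the shortest correct response becomes the chosen (positive) example, the longest correct response becomes the rejected (negative) one, and questions with fewer than two correct responses are discarded. This is a simplified restatement of the described logic, not the actual implementation in `response_rewrite.py`.

```python
def make_preference_pair(responses):
    """responses: list of dicts like {"text": str, "correct": bool}.
    Returns (chosen, rejected) texts, or None if the question should be discarded."""
    correct = [r["text"] for r in responses if r["correct"]]
    if len(correct) < 2:
        return None  # need at least two correct responses to form a pair
    chosen = min(correct, key=len)    # shortest correct -> concise positive example
    rejected = max(correct, key=len)  # longest correct -> verbose negative example
    if chosen == rejected:
        return None  # defensive check: identical responses cannot form a contrastive pair
    return chosen, rejected

# Example with 3 generated responses for one question
example = [
    {"text": "Short correct solution.", "correct": True},
    {"text": "A much longer, very verbose but still correct solution with extra steps...", "correct": True},
    {"text": "Wrong answer.", "correct": False},
]
print(make_preference_pair(example))
```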
{ "source": "NovaSky-AI/SkyThought", "title": "recipes/sky-t1-flash/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/recipes/sky-t1-flash/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 3302 }
# Sky-T1-32B-Preview

[Model](https://huggingface.co/NovaSky-AI/Sky-T1-32B-Preview) | [Dataset](https://huggingface.co/datasets/NovaSky-AI/Sky-T1_data_17k) | [Blog](https://novasky-ai.github.io/posts/sky-t1/)

Given below are the instructions to replicate the data preprocessing and training steps for Sky-T1-32B-Preview.

## Setup

Make sure you have installed the `skythought` package as outlined in the [README.md](/README.md#usage). All the data curation commands are provided from the root directory of the repo.

Set the env variable `SKYT_HOME` as the directory for the final dataset.

## Training Data Curation

To generate the training data for Sky-T1, we use the QwQ-32B-Preview model. We curate the data mixture to cover diverse domains that require reasoning, and apply a reject sampling procedure to improve the data quality. We also add the science and riddle portion from the [STILL-2 model](https://arxiv.org/pdf/2412.09413). The final data contains (1) 5k coding data from APPs and TACO, (2) 10k math data from the AIME, MATH, and Olympiads subsets of the NuminaMATH dataset, and (3) 1k science and puzzle data from STILL-2.

### Step 0 (Only for NUMINA math dataset): Label Math Difficulty from NUMINA

We provide the labelled NUMINA dataset used for training here: https://huggingface.co/datasets/NovaSky-AI/labeled_numina_difficulty . For replication, read on below.

Put one or more OpenAI API keys in a file, e.g. `keys.txt` (one per line). If there is more than one key, the script will use them in a round-robin way to speed up generation. Label math difficulty using GPT-4o-mini:

#### Example usage:

```
python scripts/label_math_difficulty.py --source [amc_aime, math, olympiads] --keys keys.txt
```

The expected output is `labeled_source_0_-1.json`. We also provide instructions to download these files under the labeled_numina_difficulty folder (download from HuggingFace).

### Step 1: Data Inference

Run inference with QwQ on several datasets. In the preview version, we use data from the following datasets.

```shell
skythought generate --task apps --model Qwen/QwQ-32B-Preview --backend vllm --backend-args tp=8 --sampling-params max_tokens=16384 --task-args dataset_split=test --result-dir $SKYT_HOME/data

skythought generate --task taco --model Qwen/QwQ-32B-Preview --backend vllm --backend-args tp=8 --sampling-params max_tokens=16384 --task-args '{"dataset_split": "train", "preprocess_config": {"difficulty": "MEDIUM"}}' --result-dir $SKYT_HOME/data

skythought generate --task numina --model Qwen/QwQ-32B-Preview --backend vllm --backend-args tp=8 --sampling-params max_tokens=16384 --task-args '{"dataset_split": "train", "preprocess_config": {"difficulty": "math"}}' --result-dir $SKYT_HOME/data

skythought generate --task numina --model Qwen/QwQ-32B-Preview --backend vllm --backend-args tp=8 --sampling-params max_tokens=16384 --task-args '{"dataset_split": "train", "preprocess_config": {"difficulty": "amc_aime"}}' --result-dir $SKYT_HOME/data

skythought generate --task numina --model Qwen/QwQ-32B-Preview --backend vllm --backend-args tp=8 --sampling-params max_tokens=16384 --task-args '{"dataset_split": "train", "preprocess_config": {"difficulty": "olympiads"}}' --result-dir $SKYT_HOME/data --start 0 --end 20000
```

This will save the results in individual folders in `result-dir`. The directory structure should be as follows:

```
├── Qwen_QwQ-32B-Preview_numina_myHash
│   ├── results.json
│   └── summary.json
├── Qwen_QwQ-32B-Preview_apps_myHash
│   ├── results.json
│   └── summary.json
...
```

### Step 2: Format the response

After obtaining the list files for the training data, convert them to a unified format (Note: this uses GPT-4o-mini to rewrite; the output is long and costs roughly $100 for our preview data). This will overwrite the result files "results.json" in the directory.

```shell
python scripts/convert_format.py --input_dir $SKYT_HOME/data --keys keys.txt
```

### Step 3: Reject Sampling on the formatted data (Example Usage with previous script)

```shell
skythought score --task apps --path <path_to_run_folder>
```

Do the same for the other datasets.

### Convert to ShareGPT format for training

After obtaining multiple converted files, merge them together and convert them to the ShareGPT format to perform training. In our preview model, we also add the science and riddle portion from the [STILL-2 model](https://arxiv.org/pdf/2412.09413); interested readers can download their portion of the data and simply concatenate it with the data obtained above.

```shell
python scripts/convert_to_data.py --input_dir $SKYT_HOME/data --output $SKYT_HOME/data/train_data.json
```

## Training

The model was trained for 3 epochs with a learning rate of 1e-5 and a batch size of 96 using [LlamaFactory](https://github.com/hiyouga/LLaMA-Factory). Our model training was completed in 19 hours on 8 H100 GPUs using DeepSpeed Zero-3 offloading, costing approximately $450 as per Lambda Cloud pricing.
{ "source": "NovaSky-AI/SkyThought", "title": "recipes/sky-t1-preview/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/recipes/sky-t1-preview/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 4952 }
# Skythought-evals: Data Generation and Evaluation Tools

## Requirements

Make sure you have installed the `skythought` package as outlined in the [README.md](/README.md#usage).

To run an OpenAI model, export the OpenAI API key.

```shell
export OPENAI_API_KEY={openai_api_key}
```

## Generation and Evaluation

### Benchmark Evaluation

Given below are two examples for evaluation.

For a walkthrough on the basics, please refer to the [example](../../examples/evaluate.ipynb).

```shell
skythought evaluate --model NovaSky-AI/Sky-T1-32B-Preview --task aime --backend vllm --backend-args tensor_parallel_size=8 --sampling-params temperature=0.6,top_p=0.95 --n 8 --result-dir ./

skythought evaluate --model NovaSky-AI/Sky-T1-32B-Preview --task gpqa_diamond --backend vllm --backend-args tensor_parallel_size=8 --sampling-params temperature=0.6,top_p=0.95 --n 8
```

**Note**: The `GPQADiamond` dataset is gated and requires first receiving access at this Huggingface [link](https://huggingface.co/datasets/Idavidrein/gpqa) (which is granted immediately), then logging into your Huggingface account in your terminal session with `huggingface-cli login`.

The results will be saved in a folder in `result-dir`:

```bash
result-dir/
├── Qwen_QwQ-32B-Preview_aime_myHash
│   ├── results.json # contains the full results for the benchmark
│   └── summary.json # contains summary of the run with configuration and metrics
```

### Scaling evaluation with Ray

You can scale evaluations across multiple model replicas (and across multiple nodes) using the [ray](https://docs.ray.io) backend:

```shell
skythought evaluate --model Qwen/QwQ-32B-Preview --task aime --backend ray --backend-args tensor_parallel_size=4,num_replicas=4 --result-dir ./
```

By default, we make use of the configuration in [ray_configs/ray_config.yaml](./ray_configs/ray_config.yaml). You can also customize the ray configuration parameters there as needed.

### Optimized settings for 32B and 7B models

The following are optimized settings on an 8xH100 or an 8xA100 node. We recommend using the `ray` backend for best performance.

For 32B models, we recommend using the default backend configuration for best performance.

```shell
skythought evaluate --model Qwen/QwQ-32B-Preview --task aime24 --backend ray --result-dir ./
```

For 7B models, we recommend using `tensor_parallel_size=1` and `num_replicas=8` for best performance. For example, the previous command will change to:

```shell
skythought evaluate --model Qwen/Qwen2-7B-Instruct --task math500 --backend ray --backend-args tensor_parallel_size=1,num_replicas=8 --result-dir ./
```

#### Multi-node inference

Note that if you have a ray cluster setup, you can scale the number of replicas as needed with the `num_replicas` argument in `backend-args` to make full use of your cluster. Make sure to execute the script on the head node and ensure that `--result-dir` is a valid directory that the head node can write to.

### Best-of-N Evaluation

You can use the `--n` parameter to specify the number of generations per problem. For `n>1`, we compute pass-based metrics (e.g., pass@1) over the `n` generations, as in the example below:

```bash
skythought evaluate --model Qwen/Qwen2-7B-Instruct --task math500 --backend ray --backend-args tensor_parallel_size=1,num_replicas=8 --sampling-params temperature=0.7,max_tokens=4096 --n 64 --result-dir ./
```

### Distill and Reject Sampling

Currently we support distill and reject sampling for the NUMINA, APPS, and TACO datasets. For NUMINA, the source can be one of `[amc_aime, math, olympiads]`.
#### Example Usage

```shell
skythought generate --model Qwen/QwQ-32B-Preview --task apps --backend ray --backend-args tensor_parallel_size=8 --sampling-params max_tokens=16384 --result-dir $SKYT_HOME/data
```

### Reproducibility Issues

We've noticed that it can be hard to reproduce results in reasoning benchmarks. Beyond the lack of agreed sampling parameters and metrics in the field at the moment, there can be significant differences in results across different evaluation codebases, and even for the same codebase with a different set of dependencies. In half-precision (bfloat16 or float16), numerical error accumulation will change outputs ever so slightly, which can dramatically alter final performance. There are three factors we've noticed that affect results:

- Long context generations: Errors can accumulate so that the output changes at 1k+ tokens, which compounds as you keep generating. Since we typically set max tokens to be 16k or 32k tokens, the final solution will change significantly.
- vLLM settings: With vLLM, we've also noticed that at half-precision, different batch sizes can affect downstream evaluation results by a few percentage points. Further, different tensor parallelism settings can also change results in half-precision.
- vLLM version: Different versions of vLLM will use different CUDA-Toolkit or Flash attention versions. Even for the same settings, these differences in the underlying kernels used can change results.

We recommend running all evaluation benchmarks at full precision (i.e., float32) to avoid this. By default, we run evaluation in `float32`, which can be customized with the `--backend-args` flag for local inference. In full precision, evaluation results should be robust to changes in batch size, tensor parallel size, version differences, etc.
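As a side note on the Best-of-N evaluation described earlier: the standard unbiased pass@k estimator over `n` samples with `c` correct ones is `1 - C(n-c, k) / C(n, k)`. Whether the evals library uses exactly this estimator is not stated here; the sketch below is included only as a reference for interpreting pass-based metrics computed from multiple generations.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    randomly drawn samples (out of n total, c of them correct) is correct."""
    if n - c < k:
        return 1.0  # every possible draw of k samples contains a correct one
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 64 generations per problem, 10 of them correct
print(pass_at_k(64, 10, 1))  # 0.15625, i.e. c/n for k=1
print(pass_at_k(64, 10, 8))
```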
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/evals/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/evals/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 5294 }
# Reproducing results on non-reasoning benchmarks

For the full set of results, see [here](./README.md#results-on-qa-and-instruction-following-benchmarks).

## Installation instructions

1. For `lm_eval`, install the package by executing the following:

```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
git checkout 703fbff
pip install -e ".[ifeval]"
```

For more details, you can refer to the official instructions [here](https://github.com/EleutherAI/lm-evaluation-harness/tree/703fbffd6fe5e136bbb9d884cb40844e5503ae5d?tab=readme-ov-file#install). We report results with commit https://github.com/EleutherAI/lm-evaluation-harness/commit/703fbffd6fe5e136bbb9d884cb40844e5503ae5d

2. For `fastchat`, follow the instructions [here](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge#install). The current implementation of FastChat is based on OpenAI version <= 0.28.0. To make use of the latest vllm backend, it is recommended to migrate the `llm_judge` folder to use openai>=1.0.0. You can run `openai migrate` for the FastChat codebase or follow the PR [here](https://github.com/lm-sys/FastChat/pull/2915/files)

3. For `BFCL`, you can follow the official instructions [here](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#basic-installation). We further evaluate on all test categories, which requires [setting up environment variables](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#setting-up-environment-variables), and [obtaining API keys for executable test categories](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#api-keys-for-executable-test-categories). Make sure to use changes from [this PR](https://github.com/ShishirPatil/gorilla/pull/888) for QwQ and Sky-T1 model support.

4. For `Arena-Hard` results, you can follow the instructions [here](https://github.com/lmarena/arena-hard-auto). We use `gpt-4-1106-preview` as the judge.

## Commands for reproducing results

All the benchmarks were run on an 8xH100 machine with the `vllm` backend. If you're running on a different device, make sure to tweak `tensor_parallel_size` and, if needed, the `batch_size` arguments. Expect some variance in scores (+/- 1%) for different evaluation settings (e.g., `tensor_parallel_size`).

All the commands below are given for `NovaSky-AI/Sky-T1-32B-Preview`. Simply substitute the model name for `Qwen/Qwen-2.5-32B-Instruct`. For `Qwen/QwQ-32B-Preview`, we further make use of two arguments `revision=refs/pr/58,tokenizer_revision=refs/pr/58` to use a corrected revision of QwQ. For more details on this, see https://github.com/NovaSky-AI/SkyThought/pull/26#issuecomment-2606435601.
### MMLU (0 shot; no CoT)

```bash
lm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mmlu --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn
```

For QwQ, you would do

```bash
lm_eval --model vllm --model_args pretrained=Qwen/QwQ-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048,revision=refs/pr/58,tokenizer_revision=refs/pr/58 --tasks mmlu --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn
```

### MMLU (5 shot; no CoT)

```bash
lm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mmlu --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn --num_fewshot 5
```

### ARC-C (0 shot; no CoT)

```bash
lm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks arc_challenge --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn
```

### IFEval

```bash
lm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.9,data_parallel_size=1 --tasks leaderboard_ifeval --trust_remote_code --batch_size auto --apply_chat_template --fewshot_as_multiturn
```

We use the `prompt_level_strict_acc` metric following Qwen-2.5.

### MGSM (native CoT)

```bash
lm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mgsm_direct --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn
```

We report the average value of the `flexible-extract` filter.

### MGSM (8-shot; native CoT)

```bash
lm_eval --model vllm --model_args pretrained=NovaSky-AI/Sky-T1-32B-Preview,tensor_parallel_size=8,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=1,max_model_len=2048 --tasks mgsm_direct --trust_remote_code --batch_size 8 --apply_chat_template --fewshot_as_multiturn --num_fewshot 8
```

### LLM-as-a-Judge

We use the default settings - with `max_tokens` 1024 and the `gpt-4` judge. We observe that some reasoning models like `Qwen/QwQ-32B-Preview` are sometimes unable to provide brief responses and thus get truncated at the used `max_tokens`. While this will affect the final rating, given the context length limitations of the commonly used `gpt-4` judge (8K tokens), we stick to the 1024 `max_tokens` budget for consistency.

1. First, serve the model with vLLM:

```bash
vllm serve NovaSky-AI/Sky-T1-32B-Preview --dtype auto --tensor-parallel-size 8 --gpu-memory-utilization 0.9
```

For `Qwen/QwQ-32B-Preview`, use

```bash
vllm serve Qwen/QwQ-32B-Preview --dtype auto --tensor-parallel-size 8 --gpu-memory-utilization 0.9 --revision refs/pr/58 --tokenizer-revision refs/pr/58
```

2. Next, generate the model responses

```bash
python gen_api_answer.py --model NovaSky-AI/Sky-T1-32B-Preview --openai-api-base http://localhost:8000/v1 --parallel 50
```

Note: The generated results will be in `data/model_answer/<repo_id>/<model name>.jsonl`. Move them to the root folder `data/model_answer/`.

3.
After generating responses for all the models, evaluate with the default settings ```bash export OPENAI_API_KEY=XXXXXX # set the OpenAI API key python gen_judgment.py --model-list Sky-T1-32B-Preview QwQ-32B-Preview Qwen2.5-32B-Instruct --parallel 2 ``` 4. Get MTBench scores (we use the average score of both turns) ```bash python show_result.py ``` ### BFCL-v3 Our results are reported on `test-category` `all` . Make sure to get the API keys for the executable test categories by following the instructions [here](https://github.com/ShishirPatil/gorilla/tree/main/berkeley-function-call-leaderboard#api-keys-for-executable-test-categories) Run ```bash bfcl generate --model NovaSky-AI/Sky-T1-32B-Preview --test-category all --backend vllm --num-gpus 8 --gpu-memory-utilization 0.9 ``` For evaluation, you can simply run ```bash bfcl evaluate --model Qwen/QwQ-32B-Preview,NovaSky-AI/Sky-T1-32B-Preview,Qwen/Qwen2.5-32B-Instruct --test-category all --api-sanity-check ``` ### Arena Hard For `Arena-Hard`, we use the following script to start a `TGI` service for generating answers ```bash hf_pat= model=NovaSky-AI/Sky-T1-32B-Preview volume=/mnt/local_storage/data/cache port=1996 huggingface-cli download $model sudo docker run --gpus 8 -e HUGGING_FACE_HUB_TOKEN=$hf_pat --shm-size 2000g -p $port:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:2.2.0 --model-id $model --max-input-length 8192 --max-batch-total-tokens 8193 --max-batch-prefill-tokens 8193 --max-total-tokens 8193 --sharded true ``` For running the `gen_answer.py` script, we use the following `config_api` yaml setting. For `qwq-32b-preview`, we explicitly specify the system prompt as `You are a helpful and harmless assistant. You are Qwen developed by Alibaba.` to avoid the CoT prompt. ```yaml ... sky-T1-32B-Preview: model_name: sky-T1-32B-Preview endpoints: - api_base: http://localhost:1996/v1 api_key: empty api_type: openai parallel: 8 ... ``` and finally for `gen_judgment.py`, we use `gpt-4-1106-preview` as the judge. #### Supplementary results for Arena-Hard Here are some supplementary results for Arena-Hard, compared with o1-mini which is the best performing model on this benchmark (as of Jan 2025). | model | score | rating_q025 | rating_q975 | CI | avg_tokens | date | |-------|--------|------------|-------------|-------|------------|-------| | o1-mini-2024-09-12 | 91.98 | 90.88 | 93.12 | (-1.10, +1.14) | 1399.0 | 2025-01-18 | | sky-T1-32B-Preview | 74.79 | 72.28 | 76.8 | (-2.51, +2.01) | 847.0 | 2025-01-18 | | qwen2.5-32b-instruct | 66.51 | 64.55 | 68.4 | (-1.96, +1.89) | 611.0 | 2025-01-18 | | qwq-32b-preview | 52.6 | 50.86 | 54.91 | (-1.74, +2.31) | 1005.0 | 2025-01-23 | For more details, see: https://github.com/NovaSky-AI/SkyThought/pull/26#issuecomment-2599525551
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/evals/base_instruct_evals.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/evals/base_instruct_evals.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 9166 }
### Install veRL

1. Create a conda environment:

```bash
conda create -n verl python==3.9
conda activate verl
pip install -r requirements.txt
```

2. Install common dependencies (required for all backends):

```bash
pip3 install vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1
pip3 install ray

# flash attention 2
pip3 install flash-attn --no-build-isolation
```

3. Install veRL:

```bash
pip3 install -e .
```

### Prepare the data

Run `python data/data_prepare_*.py --output {corresponding path}` for the dataset you want to train on.

### Launch the training

```bash
cd examples/sky-t1
bash ./run-sky-t1-7b-zero.sh
```

### Acknowledgement

This repo is built on top of [VeRL](https://github.com/volcengine/verl) and [PRIME](https://github.com/PRIME-RL/PRIME).
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 733 }
# S*: Test Time Scaling for Code Generation

This folder provides the code for the paper "S*: Test Time Scaling for Code Generation".

![Overview of S* approach](assets/figure1.png)

## Installation

Main packages: `dspy==2.6.2`, `torch`, `vllm`.

## Usage

The scripts to reproduce the results in the paper are in the `scripts` folder.

- Baselines are in `baselines`, `baselines_selfdebug`, and `majority_baselines`.
- Experiments on the dev set are in `sec[4,5,6]`.
- Experiments on the final test set are in `final_[]`. First run the commands under `final_oracle` to produce all generations without the different selection methods, then run the commands under `final_[]_cached` to produce generations with the different selection methods.

Results are available in Google Drive ([Link](https://drive.google.com/drive/u/1/folders/1kmCoJ7Mkvj-umpkfsA5960hYpNrgH4X4)).

Below is a simple command that produces generations with oracle selection and 3 rounds of generation for gpt-4o-mini. Set `OPENAI_API_KEY` as an environment variable with `export OPENAI_API_KEY=xxx`.

```bash
python evaluate_multiprocess.py \
    --difficulty=easy \
    --temperature=0.7 \
    --num_threads=32 \
    --n=16 \
    --selection oracle_all_rounds \
    --lcb_version release_v2 \
    --num_round 3 \
    --result_json_path="results/final_4omini_n_16_debug_public3_select_oracle_easy.json"
```

To run experiments with locally served models, first serve the model with `vllm serve model_name` (an illustrative command is shown after the citation below).

#### Citation

```bibtex
@article{li2025sstar,
  title={S*: Test Time Scaling for Code Generation},
  author={Li, Dacheng and Cao, Shiyi and Cao, Chengkun and Li, Xiuyu and Tan, Shangyin and Keutzer, Kurt and Xing, Jiarong and Gonzalez, Joseph E. and Stoica, Ion},
  year={2025}
}
```
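As a concrete illustration of the locally served setup mentioned above: the model name and parallelism settings below are placeholders rather than the configuration used in the paper, so substitute whatever model and GPU count match your experiment.

```bash
# Illustrative only: pick the model and GPU count that match your setup.
vllm serve Qwen/Qwen2.5-Coder-32B-Instruct --tensor-parallel-size 8 --port 8000
# The evaluation script can then talk to the local OpenAI-compatible endpoint
# at http://localhost:8000/v1 (check the script's arguments for how to point it there).
```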
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/test-time-scaling/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/test-time-scaling/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 1731 }
## Training

We use a fork of [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory) to perform training.

Step 1: Add the data path (either the one produced by the tools directory or the one we provide) to the `file_name` field of the Sky-T1 entry in [LLaMA-Factory/data/dataset_info.json](./LLaMA-Factory/data/dataset_info.json). A sketch of what such an entry can look like is shown below.

Step 2: Run `FORCE_TORCHRUN=1 NNODES=1 NODE_RANK=0 MASTER_PORT=29501 llamafactory-cli train examples/train_full/qwen2_full_sft.yaml` to train the 32B model on 8 H100 GPUs. Interested readers can refer to the detailed settings in [examples/train_full/qwen2_full_sft.yaml](./LLaMA-Factory/examples/train_full/qwen2_full_sft.yaml).
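For orientation, a `dataset_info.json` entry typically looks like the fragment below. The entry name, file path, and column mapping here are assumptions based on LLaMA-Factory's documented dataset format, not the exact contents of the Sky-T1 entry; locate the existing entry in the file and only update its `file_name` as described in Step 1.

```json
{
  "sky_t1": {
    "file_name": "/path/to/converted_sky_t1_data.json",
    "formatting": "sharegpt",
    "columns": {
      "messages": "conversations"
    }
  }
}
```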
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 657 }
# Labeled NUMINA Difficulty Data

We also include difficulty-labeled data from NUMINA in the following files: `labeled_amc_aime_0_-1.json`, `labeled_math_0_-1.json`, `labeled_olympiads_0_-1.json`. These files can be downloaded from [HuggingFace](https://huggingface.co/datasets/NovaSky-AI/labeled_numina_difficulty).
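If it helps, one way to fetch and load one of these files programmatically is sketched below; it uses the standard `huggingface_hub` download API and assumes the files sit at the top level of the dataset repository and are plain JSON.

```python
# Sketch: download one of the labeled difficulty files from the HF dataset repo and load it.
# Assumes the file lives at the top level of the repository and is a plain JSON document.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="NovaSky-AI/labeled_numina_difficulty",
    filename="labeled_math_0_-1.json",
    repo_type="dataset",
)
with open(path) as f:
    records = json.load(f)
print(len(records))
```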
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/evals/labeled_numina_difficulty/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/evals/labeled_numina_difficulty/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 331 }
# veRL documents ## Build the docs ```bash # Install dependencies. pip install -r requirements-docs.txt # Build the docs. make clean make html ``` ## Open the docs with your browser ```bash python -m http.server -d _build/html/ ``` Launch your browser and open localhost:8000.
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 281 }
Welcome to veRL's documentation!
================================================

.. _hf_arxiv: https://arxiv.org/pdf/2409.19256

veRL is a flexible, efficient and production-ready RL training framework designed for large language models (LLMs) post-training. It is an open source implementation of the `HybridFlow <hf_arxiv>`_ paper.

veRL is flexible and easy to use with:

- **Easy extension of diverse RL algorithms**: The Hybrid programming model combines the strengths of single-controller and multi-controller paradigms to enable flexible representation and efficient execution of complex Post-Training dataflows, allowing users to build RL dataflows in a few lines of code.

- **Seamless integration of existing LLM infra with modular APIs**: Decouples computation and data dependencies, enabling seamless integration with existing LLM frameworks, such as PyTorch FSDP, Megatron-LM and vLLM. Moreover, users can easily extend it to other LLM training and inference frameworks.

- **Flexible device mapping and parallelism**: Supports various placements of models onto different sets of GPUs for efficient resource utilization and scalability across different cluster sizes.

- Ready integration with popular HuggingFace models

veRL is fast with:

- **State-of-the-art throughput**: By seamlessly integrating existing SOTA LLM training and inference frameworks, veRL achieves high generation and training throughput.

- **Efficient actor model resharding with 3D-HybridEngine**: Eliminates memory redundancy and significantly reduces communication overhead during transitions between training and generation phases.

--------------------------------------------

.. _Contents:

.. toctree::
   :maxdepth: 5
   :caption: Quickstart
   :titlesonly:
   :numbered:

   start/install
   start/quickstart

.. toctree::
   :maxdepth: 5
   :caption: Data Preparation
   :titlesonly:
   :numbered:

   preparation/prepare_data
   preparation/reward_function

.. toctree::
   :maxdepth: 2
   :caption: PPO Example
   :titlesonly:
   :numbered:

   examples/ppo_code_architecture
   examples/config
   examples/gsm8k_example

.. toctree::
   :maxdepth: 1
   :caption: PPO Trainer and Workers

   workers/ray_trainer
   workers/fsdp_workers
   workers/megatron_workers

.. toctree::
   :maxdepth: 1
   :caption: Experimental Results

   experiment/ppo

.. toctree::
   :maxdepth: 1
   :caption: Advanced Usage and Extension

   advance/placement
   advance/dpo_extension
   advance/fsdp_extension
   advance/megatron_extension

Contribution
-------------

veRL is free software; you can redistribute it and/or modify it under the terms of the Apache License 2.0. We welcome contributions. Join us on `GitHub <https://github.com/volcengine/verl>`_ and `Slack <https://join.slack.com/t/verlgroup/shared_invite/zt-2w5p9o4c3-yy0x2Q56s_VlGLsJ93A6vA>`_ for discussions.

Code formatting
^^^^^^^^^^^^^^^^^^^^^^^^

We use yapf (Google style) to enforce strict code formatting when reviewing MRs. Run yapf at the top level of the verl repo:

.. code-block:: bash

   pip3 install yapf
   yapf -ir -vv --style ./.style.yapf verl examples tests
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/index.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/index.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 3125 }
![# LLaMA Factory](assets/logo.png)

[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main)
[![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/)
[![Citation](https://img.shields.io/badge/citation-93-green)](#projects-using-llama-factory)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls)
[![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK)
[![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai)
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)
[![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory)
[![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board)
[![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board)
[![SageMaker](https://img.shields.io/badge/SageMaker-Open%20in%20AWS-blue)](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)
[![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535)

👋 Join our [WeChat](assets/wechat.jpg) or [NPU user group](assets/wechat_npu.jpg).

\[ English | [中文](README_zh.md) \]

**Fine-tuning a large language model can be as easy as...**

https://github.com/user-attachments/assets/7c96b465-9df7-45f4-8053-bf03e58386d3

Choose your path:

- **Documentation (WIP)**: https://llamafactory.readthedocs.io/zh-cn/latest/
- **Colab**: https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing
- **Local machine**: Please refer to [usage](#getting-started)
- **PAI-DSW**: [Llama3 Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL Example](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl)
- **Amazon SageMaker**: [Blog](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/)

Recent activities:

- **2024/10/18-2024/11/30**: Build a personal tour guide bot using PAI+LLaMA Factory. [[website]](https://developer.aliyun.com/topic/llamafactory2)

> [!NOTE]
> Except for the above links, all other websites are unauthorized third-party websites. Please use them with caution.
## Table of Contents - [Features](#features) - [Benchmark](#benchmark) - [Changelog](#changelog) - [Supported Models](#supported-models) - [Supported Training Approaches](#supported-training-approaches) - [Provided Datasets](#provided-datasets) - [Requirement](#requirement) - [Getting Started](#getting-started) - [Projects using LLaMA Factory](#projects-using-llama-factory) - [License](#license) - [Citation](#citation) - [Acknowledgement](#acknowledgement) ## Features - **Various models**: LLaMA, LLaVA, Mistral, Mixtral-MoE, Qwen, Qwen2-VL, Yi, Gemma, Baichuan, ChatGLM, Phi, etc. - **Integrated methods**: (Continuous) pre-training, (multimodal) supervised fine-tuning, reward modeling, PPO, DPO, KTO, ORPO, etc. - **Scalable resources**: 16-bit full-tuning, freeze-tuning, LoRA and 2/3/4/5/6/8-bit QLoRA via AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ. - **Advanced algorithms**: [GaLore](https://github.com/jiaweizzhao/GaLore), [BAdam](https://github.com/Ledzy/BAdam), [Adam-mini](https://github.com/zyushun/Adam-mini), DoRA, LongLoRA, LLaMA Pro, Mixture-of-Depths, LoRA+, LoftQ, PiSSA and Agent tuning. - **Practical tricks**: [FlashAttention-2](https://github.com/Dao-AILab/flash-attention), [Unsloth](https://github.com/unslothai/unsloth), [Liger Kernel](https://github.com/linkedin/Liger-Kernel), RoPE scaling, NEFTune and rsLoRA. - **Experiment monitors**: LlamaBoard, TensorBoard, Wandb, MLflow, etc. - **Faster inference**: OpenAI-style API, Gradio UI and CLI with vLLM worker. ## Benchmark Compared to ChatGLM's [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning), LLaMA Factory's LoRA tuning offers up to **3.7 times faster** training speed with a better Rouge score on the advertising text generation task. By leveraging 4-bit quantization technique, LLaMA Factory's QLoRA further improves the efficiency regarding the GPU memory. ![benchmark](assets/benchmark.svg) <details><summary>Definitions</summary> - **Training Speed**: the number of training samples processed per second during the training. (bs=4, cutoff_len=1024) - **Rouge Score**: Rouge-2 score on the development set of the [advertising text generation](https://aclanthology.org/D19-1321.pdf) task. (bs=4, cutoff_len=1024) - **GPU Memory**: Peak GPU memory usage in 4-bit quantized training. (bs=1, cutoff_len=1024) - We adopt `pre_seq_len=128` for ChatGLM's P-Tuning and `lora_rank=32` for LLaMA Factory's LoRA tuning. </details> ## Changelog [24/11/27] We supported fine-tuning the **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** model and the **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** dataset. [24/10/09] We supported downloading pre-trained models and datasets from the **[Modelers Hub](https://modelers.cn/models)**. See [this tutorial](#download-from-modelers-hub) for usage. [24/09/19] We supported fine-tuning the **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** models. [24/08/30] We supported fine-tuning the **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** models. Thank [@simonJJJ](https://github.com/simonJJJ)'s PR. <details><summary>Full Changelog</summary> [24/08/27] We supported **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**. Try `enable_liger_kernel: true` for efficient training. [24/08/09] We supported **[Adam-mini](https://github.com/zyushun/Adam-mini)** optimizer. See [examples](examples/README.md) for usage. Thank [@relic-yuexi](https://github.com/relic-yuexi)'s PR. 
[24/07/04] We supported [contamination-free packed training](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing). Use `neat_packing: true` to activate it. Thank [@chuan298](https://github.com/chuan298)'s PR. [24/06/16] We supported **[PiSSA](https://arxiv.org/abs/2404.02948)** algorithm. See [examples](examples/README.md) for usage. [24/06/07] We supported fine-tuning the **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** and **[GLM-4](https://github.com/THUDM/GLM-4)** models. [24/05/26] We supported **[SimPO](https://arxiv.org/abs/2405.14734)** algorithm for preference learning. See [examples](examples/README.md) for usage. [24/05/20] We supported fine-tuning the **PaliGemma** series models. Note that the PaliGemma models are pre-trained models, you need to fine-tune them with `paligemma` template for chat completion. [24/05/18] We supported **[KTO](https://arxiv.org/abs/2402.01306)** algorithm for preference learning. See [examples](examples/README.md) for usage. [24/05/14] We supported training and inference on the Ascend NPU devices. Check [installation](#installation) section for details. [24/04/26] We supported fine-tuning the **LLaVA-1.5** multimodal LLMs. See [examples](examples/README.md) for usage. [24/04/22] We provided a **[Colab notebook](https://colab.research.google.com/drive/1eRTPn37ltBbYsISy9Aw2NuI2Aq5CQrD9?usp=sharing)** for fine-tuning the Llama-3 model on a free T4 GPU. Two Llama-3-derived models fine-tuned using LLaMA Factory are available at Hugging Face, check [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) and [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese) for details. [24/04/21] We supported **[Mixture-of-Depths](https://arxiv.org/abs/2404.02258)** according to [AstraMindAI's implementation](https://github.com/astramind-ai/Mixture-of-depths). See [examples](examples/README.md) for usage. [24/04/16] We supported **[BAdam](https://arxiv.org/abs/2404.02827)** optimizer. See [examples](examples/README.md) for usage. [24/04/16] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s long-sequence training (Llama-2-7B-56k within 24GB). It achieves **117%** speed and **50%** memory compared with FlashAttention-2, more benchmarks can be found in [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison). [24/03/31] We supported **[ORPO](https://arxiv.org/abs/2403.07691)**. See [examples](examples/README.md) for usage. [24/03/21] Our paper "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" is available at arXiv! [24/03/20] We supported **FSDP+QLoRA** that fine-tunes a 70B model on 2x24GB GPUs. See [examples](examples/README.md) for usage. [24/03/13] We supported **[LoRA+](https://arxiv.org/abs/2402.12354)**. See [examples](examples/README.md) for usage. [24/03/07] We supported **[GaLore](https://arxiv.org/abs/2403.03507)** optimizer. See [examples](examples/README.md) for usage. [24/03/07] We integrated **[vLLM](https://github.com/vllm-project/vllm)** for faster and concurrent inference. Try `infer_backend: vllm` to enjoy **270%** inference speed. [24/02/28] We supported weight-decomposed LoRA (**[DoRA](https://arxiv.org/abs/2402.09353)**). Try `use_dora: true` to activate DoRA training. [24/02/15] We supported **block expansion** proposed by [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro). See [examples](examples/README.md) for usage. 
[24/02/05] Qwen1.5 (Qwen2 beta version) series models are supported in LLaMA-Factory. Check this [blog post](https://qwenlm.github.io/blog/qwen1.5/) for details. [24/01/18] We supported **agent tuning** for most models, equipping model with tool using abilities by fine-tuning with `dataset: glaive_toolcall_en`. [23/12/23] We supported **[unsloth](https://github.com/unslothai/unsloth)**'s implementation to boost LoRA tuning for the LLaMA, Mistral and Yi models. Try `use_unsloth: true` argument to activate unsloth patch. It achieves **170%** speed in our benchmark, check [this page](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison) for details. [23/12/12] We supported fine-tuning the latest MoE model **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)** in our framework. See hardware requirement [here](#hardware-requirement). [23/12/01] We supported downloading pre-trained models and datasets from the **[ModelScope Hub](https://modelscope.cn/models)**. See [this tutorial](#download-from-modelscope-hub) for usage. [23/10/21] We supported **[NEFTune](https://arxiv.org/abs/2310.05914)** trick for fine-tuning. Try `neftune_noise_alpha: 5` argument to activate NEFTune. [23/09/27] We supported **$S^2$-Attn** proposed by [LongLoRA](https://github.com/dvlab-research/LongLoRA) for the LLaMA models. Try `shift_attn: true` argument to enable shift short attention. [23/09/23] We integrated MMLU, C-Eval and CMMLU benchmarks in this repo. See [examples](examples/README.md) for usage. [23/09/10] We supported **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**. Try `flash_attn: fa2` argument to enable FlashAttention-2 if you are using RTX4090, A100 or H100 GPUs. [23/08/12] We supported **RoPE scaling** to extend the context length of the LLaMA models. Try `rope_scaling: linear` argument in training and `rope_scaling: dynamic` argument at inference to extrapolate the position embeddings. [23/08/11] We supported **[DPO training](https://arxiv.org/abs/2305.18290)** for instruction-tuned models. See [examples](examples/README.md) for usage. [23/07/31] We supported **dataset streaming**. Try `streaming: true` and `max_steps: 10000` arguments to load your dataset in streaming mode. [23/07/29] We released two instruction-tuned 13B models at Hugging Face. See these Hugging Face Repos ([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft)) for details. [23/07/18] We developed an **all-in-one Web UI** for training, evaluation and inference. Try `train_web.py` to fine-tune models in your Web browser. Thank [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development. [23/07/09] We released **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹, an easy-to-use package for editing the factual knowledge of large language models efficiently. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested. [23/06/29] We provided a **reproducible example** of training a chat model using instruction-following datasets, see [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft) for details. [23/06/22] We aligned the [demo API](src/api_demo.py) with the [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format where you can insert the fine-tuned model in **arbitrary ChatGPT-based applications**. 
[23/06/03] We supported quantized training and inference (aka **[QLoRA](https://github.com/artidoro/qlora)**). See [examples](examples/README.md) for usage. </details> ## Supported Models | Model | Model size | Template | | ----------------------------------------------------------------- | -------------------------------- | ---------------- | | [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 | | [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 560M/1.1B/1.7B/3B/7.1B/176B | - | | [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 | | [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere | | [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek | | [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon | | [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma | | [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 | | [Index](https://huggingface.co/IndexTeam) | 1.9B | index | | [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 | | [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - | | [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 | | [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 | | [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama | | [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava | | [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next | | [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video | | [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 | | [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral | | [OLMo](https://huggingface.co/allenai) | 1B/7B | - | | [PaliGemma](https://huggingface.co/google) | 3B | paligemma | | [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - | | [Phi-3](https://huggingface.co/microsoft) | 4B/14B | phi | | [Phi-3-small](https://huggingface.co/microsoft) | 7B | phi_small | | [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral | | [Qwen/QwQ (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen | | [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl | | [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 | | [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - | | [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse | | [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi | | [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl | | [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan | > [!NOTE] > For the "base" models, the `template` argument can be chosen from `default`, `alpaca`, `vicuna` etc. But make sure to use the **corresponding template** for the "instruct/chat" models. > > Remember to use the **SAME** template in training and inference. Please refer to [constants.py](src/llamafactory/extras/constants.py) for a full list of models we supported. You also can add a custom chat template to [template.py](src/llamafactory/data/template.py). 
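To make the template note above concrete, the template is selected via the `template` field of the YAML config used for both training and inference. The fragment below is purely illustrative: the field names follow LLaMA-Factory's example configs, while the model and dataset values are placeholders rather than a verified recipe.

```yaml
# Illustrative fragment only: pick the template that matches your model family,
# and keep it identical between the training and inference configs.
model_name_or_path: Qwen/Qwen2.5-32B-Instruct
template: qwen
stage: sft
finetuning_type: full
dataset: sky_t1        # hypothetical dataset name registered in dataset_info.json
```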
## Supported Training Approaches | Approach | Full-tuning | Freeze-tuning | LoRA | QLoRA | | ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ | | Pre-Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | Supervised Fine-Tuning | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | Reward Modeling | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | PPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | DPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | KTO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | ORPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | SimPO Training | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | > [!TIP] > The implementation details of PPO can be found in [this blog](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html). ## Provided Datasets <details><summary>Pre-training datasets</summary> - [Wiki Demo (en)](data/wiki_demo.txt) - [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) - [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2) - [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220) - [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered) - [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile) - [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B) - [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb) - [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) - [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack) - [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata) </details> <details><summary>Supervised fine-tuning datasets</summary> - [Identity (en&zh)](data/identity.json) - [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca) - [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3) - [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) - [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) - [LIMA (en)](https://huggingface.co/datasets/GAIR/lima) - [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) - [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN) - [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN) - [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN) - [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M) - [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M) - [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) - [UltraChat (en)](https://github.com/thunlp/UltraChat) - [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) - [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) - [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) - [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca) - 
[SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca) - [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) - [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) - [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa) - [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa) - [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn) - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) - [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data) - [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen) - [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k) - [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) - [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) - [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct) - [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) - [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) - [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) - [STEM (zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction) - [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo) - [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2) - [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) - [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1) - [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub) - [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT) - [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k) - [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions) - [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de) - [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de) - [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de) - [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de) - [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de) - [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de) - [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de) - [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de) - [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de) </details> <details><summary>Preference datasets</summary> - [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k) - [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) - [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset) - [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback) - [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs) - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf) - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) - [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de) - [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k) </details> Some datasets require confirmation before 
using them, so we recommend logging in with your Hugging Face account using these commands. ```bash pip install --upgrade huggingface_hub huggingface-cli login ``` ## Requirement | Mandatory | Minimum | Recommend | | ------------ | ------- | --------- | | python | 3.8 | 3.11 | | torch | 1.13.1 | 2.4.0 | | transformers | 4.41.2 | 4.43.4 | | datasets | 2.16.0 | 2.20.0 | | accelerate | 0.30.1 | 0.32.0 | | peft | 0.11.1 | 0.12.0 | | trl | 0.8.6 | 0.9.6 | | Optional | Minimum | Recommend | | ------------ | ------- | --------- | | CUDA | 11.6 | 12.2 | | deepspeed | 0.10.0 | 0.14.0 | | bitsandbytes | 0.39.0 | 0.43.1 | | vllm | 0.4.3 | 0.5.0 | | flash-attn | 2.3.0 | 2.6.3 | ### Hardware Requirement \* *estimated* | Method | Bits | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B | | ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ | | Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB | | Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB | | Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB | | LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB | | QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB | | QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB | | QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB | ## Getting Started ### Installation > [!IMPORTANT] > Installation is mandatory. ```bash git clone --depth 1 https://github.com/hiyouga/LLaMA-Factory.git cd LLaMA-Factory pip install -e ".[torch,metrics]" ``` Extra dependencies available: torch, torch-npu, metrics, deepspeed, liger-kernel, bitsandbytes, hqq, eetq, gptq, awq, aqlm, vllm, galore, badam, adam-mini, qwen, modelscope, openmind, quality > [!TIP] > Use `pip install --no-deps -e .` to resolve package conflicts. <details><summary>For Windows users</summary> If you want to enable the quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of `bitsandbytes` library, which supports CUDA 11.1 to 12.2, please select the appropriate [release version](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels) based on your CUDA version. ```bash pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl ``` To enable FlashAttention-2 on the Windows platform, you need to install the precompiled `flash-attn` library, which supports CUDA 12.1 to 12.2. Please download the corresponding version from [flash-attention](https://github.com/bdashore3/flash-attention/releases) based on your requirements. </details> <details><summary>For Ascend NPU users</summary> To install LLaMA Factory on Ascend NPU devices, please specify extra dependencies: `pip install -e ".[torch-npu,metrics]"`. Additionally, you need to install the **[Ascend CANN Toolkit and Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**. 
Please follow the [installation tutorial](https://www.hiascend.com/document/detail/en/CANNCommunityEdition/600alphaX/softwareinstall/instg/atlasdeploy_03_0031.html) or use the following commands: ```bash # replace the url according to your CANN version and devices # install CANN Toolkit wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install # install CANN Kernels wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install # set env variables source /usr/local/Ascend/ascend-toolkit/set_env.sh ``` | Requirement | Minimum | Recommend | | ------------ | ------- | ----------- | | CANN | 8.0.RC1 | 8.0.RC1 | | torch | 2.1.0 | 2.1.0 | | torch-npu | 2.1.0 | 2.1.0.post3 | | deepspeed | 0.13.2 | 0.13.2 | Remember to use `ASCEND_RT_VISIBLE_DEVICES` instead of `CUDA_VISIBLE_DEVICES` to specify the device to use. If you cannot infer model on NPU devices, try setting `do_sample: false` in the configurations. Download the pre-built Docker images: [32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html) </details> ### Data Preparation Please refer to [data/README.md](data/README.md) for checking the details about the format of dataset files. You can either use datasets on HuggingFace / ModelScope / Modelers hub or load the dataset in local disk. > [!NOTE] > Please update `data/dataset_info.json` to use your custom dataset. ### Quickstart Use the following 3 commands to run LoRA **fine-tuning**, **inference** and **merging** of the Llama3-8B-Instruct model, respectively. ```bash llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml llamafactory-cli chat examples/inference/llama3_lora_sft.yaml llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml ``` See [examples/README.md](examples/README.md) for advanced usage (including distributed training). > [!TIP] > Use `llamafactory-cli help` to show help information. ### Fine-Tuning with LLaMA Board GUI (powered by [Gradio](https://github.com/gradio-app/gradio)) ```bash llamafactory-cli webui ``` ### Build Docker For CUDA users: ```bash cd docker/docker-cuda/ docker compose up -d docker compose exec llamafactory bash ``` For Ascend NPU users: ```bash cd docker/docker-npu/ docker compose up -d docker compose exec llamafactory bash ``` For AMD ROCm users: ```bash cd docker/docker-rocm/ docker compose up -d docker compose exec llamafactory bash ``` <details><summary>Build without Docker Compose</summary> For CUDA users: ```bash docker build -f ./docker/docker-cuda/Dockerfile \ --build-arg INSTALL_BNB=false \ --build-arg INSTALL_VLLM=false \ --build-arg INSTALL_DEEPSPEED=false \ --build-arg INSTALL_FLASHATTN=false \ --build-arg PIP_INDEX=https://pypi.org/simple \ -t llamafactory:latest . 
docker run -dit --gpus=all \ -v ./hf_cache:/root/.cache/huggingface \ -v ./ms_cache:/root/.cache/modelscope \ -v ./om_cache:/root/.cache/openmind \ -v ./data:/app/data \ -v ./output:/app/output \ -p 7860:7860 \ -p 8000:8000 \ --shm-size 16G \ --name llamafactory \ llamafactory:latest docker exec -it llamafactory bash ``` For Ascend NPU users: ```bash # Choose docker image upon your environment docker build -f ./docker/docker-npu/Dockerfile \ --build-arg INSTALL_DEEPSPEED=false \ --build-arg PIP_INDEX=https://pypi.org/simple \ -t llamafactory:latest . # Change `device` upon your resources docker run -dit \ -v ./hf_cache:/root/.cache/huggingface \ -v ./ms_cache:/root/.cache/modelscope \ -v ./om_cache:/root/.cache/openmind \ -v ./data:/app/data \ -v ./output:/app/output \ -v /usr/local/dcmi:/usr/local/dcmi \ -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \ -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \ -v /etc/ascend_install.info:/etc/ascend_install.info \ -p 7860:7860 \ -p 8000:8000 \ --device /dev/davinci0 \ --device /dev/davinci_manager \ --device /dev/devmm_svm \ --device /dev/hisi_hdc \ --shm-size 16G \ --name llamafactory \ llamafactory:latest docker exec -it llamafactory bash ``` For AMD ROCm users: ```bash docker build -f ./docker/docker-rocm/Dockerfile \ --build-arg INSTALL_BNB=false \ --build-arg INSTALL_VLLM=false \ --build-arg INSTALL_DEEPSPEED=false \ --build-arg INSTALL_FLASHATTN=false \ --build-arg PIP_INDEX=https://pypi.org/simple \ -t llamafactory:latest . docker run -dit \ -v ./hf_cache:/root/.cache/huggingface \ -v ./ms_cache:/root/.cache/modelscope \ -v ./om_cache:/root/.cache/openmind \ -v ./data:/app/data \ -v ./output:/app/output \ -v ./saves:/app/saves \ -p 7860:7860 \ -p 8000:8000 \ --device /dev/kfd \ --device /dev/dri \ --shm-size 16G \ --name llamafactory \ llamafactory:latest docker exec -it llamafactory bash ``` </details> <details><summary>Details about volume</summary> - `hf_cache`: Utilize Hugging Face cache on the host machine. Reassignable if a cache already exists in a different directory. - `ms_cache`: Similar to Hugging Face cache but for ModelScope users. - `om_cache`: Similar to Hugging Face cache but for Modelers users. - `data`: Place datasets on this dir of the host machine so that they can be selected on LLaMA Board GUI. - `output`: Set export dir to this location so that the merged result can be accessed directly on the host machine. </details> ### Deploy with OpenAI-style API and vLLM ```bash API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml ``` > [!TIP] > Visit [this page](https://platform.openai.com/docs/api-reference/chat/create) for API document. > > Examples: [Image understanding](scripts/api_example/test_image.py) | [Function calling](scripts/api_example/test_toolcall.py) ### Download from ModelScope Hub If you have trouble with downloading models and datasets from Hugging Face, you can use ModelScope. ```bash export USE_MODELSCOPE_HUB=1 # `set USE_MODELSCOPE_HUB=1` for Windows ``` Train the model by specifying a model ID of the ModelScope Hub as the `model_name_or_path`. You can find a full list of model IDs at [ModelScope Hub](https://modelscope.cn/models), e.g., `LLM-Research/Meta-Llama-3-8B-Instruct`. ### Download from Modelers Hub You can also use Modelers Hub to download models and datasets. ```bash export USE_OPENMIND_HUB=1 # `set USE_OPENMIND_HUB=1` for Windows ``` Train the model by specifying a model ID of the Modelers Hub as the `model_name_or_path`. 
You can find a full list of model IDs at [Modelers Hub](https://modelers.cn/models), e.g., `TeleAI/TeleChat-7B-pt`. ### Use W&B Logger To use [Weights & Biases](https://wandb.ai) for logging experimental results, you need to add the following arguments to yaml files. ```yaml report_to: wandb run_name: test_run # optional ``` Set `WANDB_API_KEY` to [your key](https://wandb.ai/authorize) when launching training tasks to log in with your W&B account. ## Projects using LLaMA Factory If you have a project that should be incorporated, please contact via email or create a pull request. <details><summary>Click to show</summary> 1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223) 1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. [[arxiv]](https://arxiv.org/abs/2308.10092) 1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526) 1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816) 1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710) 1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319) 1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286) 1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904) 1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625) 1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176) 1. Yang et al. LaCo: Large Language Model Pruning via Layer Collaps. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187) 1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746) 1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801) 1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809) 1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819) 1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204) 1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714) 1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043) 1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. 
[[arxiv]](https://arxiv.org/abs/2403.02333) 1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419) 1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228) 1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073) 1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541) 1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246) 1. Hongbin Na. CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008) 1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443) 1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604) 1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827) 1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167) 1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316) 1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084) 1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836) 1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581) 1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215) 1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621) 1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140) 1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585) 1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760) 1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378) 1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055) 1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739) 1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816) 1. Zhang et al. 
TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215) 1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30) 1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380) 1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106) 1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136) 1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2406.04496) 1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688) 1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955) 1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973) 1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115) 1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815) 1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099) 1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173) 1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074) 1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408) 1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546) 1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695) 1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233) 1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069) 1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25) 1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949) 1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365) 1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470) 1. Bako et al. 
Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129) 1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044) 1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756) 1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/) 1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561) 1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637) 1. Sun et al. LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535) 1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705) 1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137) 1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf) 1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11) 1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23) 1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693) 1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168) 1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/) 1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072) 1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611) 1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: A large language model for Astronomy, based on ChatGLM2-6B and Qwen-14B. 1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: A large language model specialized in Chinese legal domain, based on Baichuan-13B, is capable of retrieving and reasoning on legal knowledge. 1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: A large language model specialized in Chinese medical domain, based on Baichuan-7B and ChatGLM-6B. 1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: A series of large language models for Chinese medical domain, based on LLaMA2-7B and Baichuan-13B. 1. 
**[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**: A series of MBTI Personality large language models, capable of giving any LLM 16 different personality types based on different datasets and training methods. 1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**: A large language model specialized in generate metadata for stable diffusion. [[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt) 1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**: A multimodal large language model specialized in Chinese medical domain, based on LLaVA-1.5-7B. 1. **[AutoRE](https://github.com/THUDM/AutoRE)**: A document-level relation extraction system based on large language models. 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**: SDKs for fine-tuning LLMs on Windows PC for NVIDIA RTX. 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**: An easy and lazy way for building multi-agent LLMs applications and supports model fine-tuning via LLaMA Factory. 1. **[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**: A full pipeline for RAG retrieval model fine-tuning, inference, and distillation. [[blog]](https://zhuanlan.zhihu.com/p/987727357) </details> ## License This repository is licensed under the [Apache-2.0 License](LICENSE). Please follow the model licenses to use the corresponding model weights: [Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan) ## Citation If this work is helpful, please kindly cite as: ```bibtex @inproceedings{zheng2024llamafactory, title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models}, author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma}, booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational 
Linguistics (Volume 3: System Demonstrations)}, address={Bangkok, Thailand}, publisher={Association for Computational Linguistics}, year={2024}, url={http://arxiv.org/abs/2403.13372} } ``` ## Acknowledgement This repo benefits from [PEFT](https://github.com/huggingface/peft), [TRL](https://github.com/huggingface/trl), [QLoRA](https://github.com/artidoro/qlora) and [FastChat](https://github.com/lm-sys/FastChat). Thanks for their wonderful works. ## Star History ![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date)
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 54058 }
![# LLaMA Factory](assets/logo.png) [![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/LLaMA-Factory?style=social)](https://github.com/hiyouga/LLaMA-Factory/stargazers) [![GitHub Code License](https://img.shields.io/github/license/hiyouga/LLaMA-Factory)](LICENSE) [![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/LLaMA-Factory)](https://github.com/hiyouga/LLaMA-Factory/commits/main) [![PyPI](https://img.shields.io/pypi/v/llamafactory)](https://pypi.org/project/llamafactory/) [![Citation](https://img.shields.io/badge/citation-93-green)](#使用了-llama-factory-的项目) [![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/LLaMA-Factory/pulls) [![Discord](https://dcbadge.vercel.app/api/server/rKfvV9r9FK?compact=true&style=flat)](https://discord.gg/rKfvV9r9FK) [![Twitter](https://img.shields.io/twitter/follow/llamafactory_ai)](https://twitter.com/llamafactory_ai) [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing) [![Open in DSW](https://gallery.pai-ml.com/assets/open-in-dsw.svg)](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) [![Spaces](https://img.shields.io/badge/🤗-Open%20in%20Spaces-blue)](https://huggingface.co/spaces/hiyouga/LLaMA-Board) [![Studios](https://img.shields.io/badge/ModelScope-Open%20in%20Studios-blue)](https://modelscope.cn/studios/hiyouga/LLaMA-Board) [![SageMaker](https://img.shields.io/badge/SageMaker-Open%20in%20AWS-blue)](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) [![GitHub Tread](https://trendshift.io/api/badge/repositories/4535)](https://trendshift.io/repositories/4535) 👋 加入我们的[微信群](assets/wechat.jpg)或 [NPU 用户群](assets/wechat_npu.jpg)。 \[ [English](README.md) | 中文 \] **微调大模型可以像这样轻松…** https://github.com/user-attachments/assets/e6ce34b0-52d5-4f3e-a830-592106c4c272 选择你的打开方式: - **入门教程**:https://zhuanlan.zhihu.com/p/695287607 - **框架文档**:https://llamafactory.readthedocs.io/zh-cn/latest/ - **Colab**:https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing - **本地机器**:请见[如何使用](#如何使用) - **PAI-DSW**:[Llama3 案例](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory) | [Qwen2-VL 案例](https://gallery.pai-ml.com/#/preview/deepLearning/nlp/llama_factory_qwen2vl) - **Amazon SageMaker**:[博客](https://aws.amazon.com/cn/blogs/china/a-one-stop-code-free-model-fine-tuning-deployment-platform-based-on-sagemaker-and-llama-factory/) 近期活动: - **2024/10/18-2024/11/30**:使用 PAI+LLaMA Factory 构建个性化导游机器人。[[活动页面]](https://developer.aliyun.com/topic/llamafactory2) > [!NOTE] > 除上述链接以外的其他网站均为未经许可的第三方网站,请小心甄别。 ## 目录 - [项目特色](#项目特色) - [性能指标](#性能指标) - [更新日志](#更新日志) - [模型](#模型) - [训练方法](#训练方法) - [数据集](#数据集) - [软硬件依赖](#软硬件依赖) - [如何使用](#如何使用) - [使用了 LLaMA Factory 的项目](#使用了-llama-factory-的项目) - [协议](#协议) - [引用](#引用) - [致谢](#致谢) ## 项目特色 - **多种模型**:LLaMA、LLaVA、Mistral、Mixtral-MoE、Qwen、Qwen2-VL、Yi、Gemma、Baichuan、ChatGLM、Phi 等等。 - **集成方法**:(增量)预训练、(多模态)指令监督微调、奖励模型训练、PPO 训练、DPO 训练、KTO 训练、ORPO 训练等等。 - **多种精度**:16 比特全参数微调、冻结微调、LoRA 微调和基于 AQLM/AWQ/GPTQ/LLM.int8/HQQ/EETQ 的 2/3/4/5/6/8 比特 QLoRA 微调。 - **先进算法**:[GaLore](https://github.com/jiaweizzhao/GaLore)、[BAdam](https://github.com/Ledzy/BAdam)、[Adam-mini](https://github.com/zyushun/Adam-mini)、DoRA、LongLoRA、LLaMA Pro、Mixture-of-Depths、LoRA+、LoftQ、PiSSA 和 Agent 微调。 - 
**实用技巧**:[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)、[Unsloth](https://github.com/unslothai/unsloth)、[Liger Kernel](https://github.com/linkedin/Liger-Kernel)、RoPE scaling、NEFTune 和 rsLoRA。 - **实验监控**:LlamaBoard、TensorBoard、Wandb、MLflow 等等。 - **极速推理**:基于 vLLM 的 OpenAI 风格 API、浏览器界面和命令行接口。 ## 性能指标 与 ChatGLM 官方的 [P-Tuning](https://github.com/THUDM/ChatGLM2-6B/tree/main/ptuning) 微调相比,LLaMA Factory 的 LoRA 微调提供了 **3.7 倍**的加速比,同时在广告文案生成任务上取得了更高的 Rouge 分数。结合 4 比特量化技术,LLaMA Factory 的 QLoRA 微调进一步降低了 GPU 显存消耗。 ![benchmark](assets/benchmark.svg) <details><summary>变量定义</summary> - **Training Speed**: 训练阶段每秒处理的样本数量。(批处理大小=4,截断长度=1024) - **Rouge Score**: [广告文案生成](https://aclanthology.org/D19-1321.pdf)任务验证集上的 Rouge-2 分数。(批处理大小=4,截断长度=1024) - **GPU Memory**: 4 比特量化训练的 GPU 显存峰值。(批处理大小=1,截断长度=1024) - 我们在 ChatGLM 的 P-Tuning 中采用 `pre_seq_len=128`,在 LLaMA Factory 的 LoRA 微调中采用 `lora_rank=32`。 </details> ## 更新日志 [24/11/27] 我们支持了 **[Skywork-o1](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)** 模型的微调和 **[OpenO1](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT)** 数据集。 [24/10/09] 我们支持了从 **[魔乐社区](https://modelers.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔乐社区下载)。 [24/09/19] 我们支持了 **[Qwen2.5](https://qwenlm.github.io/blog/qwen2.5/)** 模型的微调。 [24/08/30] 我们支持了 **[Qwen2-VL](https://qwenlm.github.io/blog/qwen2-vl/)** 模型的微调。感谢 [@simonJJJ](https://github.com/simonJJJ) 的 PR。 <details><summary>展开日志</summary> [24/08/27] 我们支持了 **[Liger Kernel](https://github.com/linkedin/Liger-Kernel)**。请使用 `enable_liger_kernel: true` 来加速训练。 [24/08/09] 我们支持了 **[Adam-mini](https://github.com/zyushun/Adam-mini)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。感谢 [@relic-yuexi](https://github.com/relic-yuexi) 的 PR。 [24/07/04] 我们支持了[无污染打包训练](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing)。请使用 `neat_packing: true` 参数。感谢 [@chuan298](https://github.com/chuan298) 的 PR。 [24/06/16] 我们支持了 **[PiSSA](https://arxiv.org/abs/2404.02948)** 算法。详细用法请参照 [examples](examples/README_zh.md)。 [24/06/07] 我们支持了 **[Qwen2](https://qwenlm.github.io/blog/qwen2/)** 和 **[GLM-4](https://github.com/THUDM/GLM-4)** 模型的微调。 [24/05/26] 我们支持了 **[SimPO](https://arxiv.org/abs/2405.14734)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。 [24/05/20] 我们支持了 **PaliGemma** 系列模型的微调。注意 PaliGemma 是预训练模型,你需要使用 `paligemma` 模板进行微调使其获得对话能力。 [24/05/18] 我们支持了 **[KTO](https://arxiv.org/abs/2402.01306)** 偏好对齐算法。详细用法请参照 [examples](examples/README_zh.md)。 [24/05/14] 我们支持了昇腾 NPU 设备的训练和推理。详情请查阅[安装](#安装-llama-factory)部分。 [24/04/26] 我们支持了多模态模型 **LLaVA-1.5** 的微调。详细用法请参照 [examples](examples/README_zh.md)。 [24/04/22] 我们提供了在免费 T4 GPU 上微调 Llama-3 模型的 **[Colab 笔记本](https://colab.research.google.com/drive/1d5KQtbemerlSDSxZIfAaWXhKr30QypiK?usp=sharing)**。Hugging Face 社区公开了两个利用 LLaMA Factory 微调的 Llama-3 模型,详情请见 [Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat) 和 [Llama3-Chinese](https://huggingface.co/zhichen/Llama3-Chinese)。 [24/04/21] 我们基于 [AstraMindAI 的仓库](https://github.com/astramind-ai/Mixture-of-depths)支持了 **[混合深度训练](https://arxiv.org/abs/2404.02258)**。详细用法请参照 [examples](examples/README_zh.md)。 [24/04/16] 我们支持了 **[BAdam](https://arxiv.org/abs/2404.02827)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。 [24/04/16] 我们支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的长序列训练(24GB 可训练 Llama-2-7B-56k)。该方法相比 FlashAttention-2 提供了 **117%** 的训练速度和 **50%** 的显存节约。更多数据请见[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。 [24/03/31] 我们支持了 **[ORPO](https://arxiv.org/abs/2403.07691)**。详细用法请参照 
[examples](examples/README_zh.md)。 [24/03/21] 我们的论文 "[LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models](https://arxiv.org/abs/2403.13372)" 可在 arXiv 上查看! [24/03/20] 我们支持了能在 2x24GB GPU 上微调 70B 模型的 **FSDP+QLoRA**。详细用法请参照 [examples](examples/README_zh.md)。 [24/03/13] 我们支持了 **[LoRA+](https://arxiv.org/abs/2402.12354)**。详细用法请参照 [examples](examples/README_zh.md)。 [24/03/07] 我们支持了 **[GaLore](https://arxiv.org/abs/2403.03507)** 优化器。详细用法请参照 [examples](examples/README_zh.md)。 [24/03/07] 我们集成了 **[vLLM](https://github.com/vllm-project/vllm)** 以实现极速并发推理。请使用 `infer_backend: vllm` 来获得 **270%** 的推理速度。 [24/02/28] 我们支持了 **[DoRA](https://arxiv.org/abs/2402.09353)** 微调。请使用 `use_dora: true` 参数进行 DoRA 微调。 [24/02/15] 我们支持了 [LLaMA Pro](https://github.com/TencentARC/LLaMA-Pro) 提出的**块扩展**方法。详细用法请参照 [examples](examples/README_zh.md)。 [24/02/05] Qwen1.5(Qwen2 测试版)系列模型已在 LLaMA-Factory 中实现微调支持。详情请查阅该[博客页面](https://qwenlm.github.io/zh/blog/qwen1.5/)。 [24/01/18] 我们针对绝大多数模型实现了 **Agent 微调**,微调时指定 `dataset: glaive_toolcall_zh` 即可使模型获得工具调用能力。 [23/12/23] 我们针对 LLaMA, Mistral 和 Yi 模型支持了 **[unsloth](https://github.com/unslothai/unsloth)** 的 LoRA 训练加速。请使用 `use_unsloth: true` 参数启用 unsloth 优化。该方法可提供 **170%** 的训练速度,详情请查阅[此页面](https://github.com/hiyouga/LLaMA-Factory/wiki/Performance-comparison)。 [23/12/12] 我们支持了微调最新的混合专家模型 **[Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)**。硬件需求请查阅[此处](#硬件依赖)。 [23/12/01] 我们支持了从 **[魔搭社区](https://modelscope.cn/models)** 下载预训练模型和数据集。详细用法请参照 [此教程](#从魔搭社区下载)。 [23/10/21] 我们支持了 **[NEFTune](https://arxiv.org/abs/2310.05914)** 训练技巧。请使用 `neftune_noise_alpha: 5` 参数启用 NEFTune。 [23/09/27] 我们针对 LLaMA 模型支持了 [LongLoRA](https://github.com/dvlab-research/LongLoRA) 提出的 **$S^2$-Attn**。请使用 `shift_attn: true` 参数以启用该功能。 [23/09/23] 我们在项目中集成了 MMLU、C-Eval 和 CMMLU 评估集。详细用法请参照 [examples](examples/README_zh.md)。 [23/09/10] 我们支持了 **[FlashAttention-2](https://github.com/Dao-AILab/flash-attention)**。如果您使用的是 RTX4090、A100 或 H100 GPU,请使用 `flash_attn: fa2` 参数以启用 FlashAttention-2。 [23/08/12] 我们支持了 **RoPE 插值**来扩展 LLaMA 模型的上下文长度。请使用 `rope_scaling: linear` 参数训练模型或使用 `rope_scaling: dynamic` 参数评估模型。 [23/08/11] 我们支持了指令模型的 **[DPO 训练](https://arxiv.org/abs/2305.18290)**。详细用法请参照 [examples](examples/README_zh.md)。 [23/07/31] 我们支持了**数据流式加载**。请使用 `streaming: true` 和 `max_steps: 10000` 参数来流式加载数据集。 [23/07/29] 我们在 Hugging Face 发布了两个 13B 指令微调模型。详细内容请查阅我们的 Hugging Face 项目([LLaMA-2](https://huggingface.co/hiyouga/Llama-2-Chinese-13b-chat) / [Baichuan](https://huggingface.co/hiyouga/Baichuan-13B-sft))。 [23/07/18] 我们开发了支持训练和测试的**浏览器一体化界面**。请使用 `train_web.py` 在您的浏览器中微调模型。感谢 [@KanadeSiina](https://github.com/KanadeSiina) 和 [@codemayq](https://github.com/codemayq) 在该功能开发中付出的努力。 [23/07/09] 我们开源了 **[FastEdit](https://github.com/hiyouga/FastEdit)** ⚡🩹,一个简单易用的、能迅速编辑大模型事实记忆的工具包。如果您感兴趣请关注我们的 [FastEdit](https://github.com/hiyouga/FastEdit) 项目。 [23/06/29] 我们提供了一个**可复现的**指令模型微调示例,详细内容请查阅 [Baichuan-7B-sft](https://huggingface.co/hiyouga/Baichuan-7B-sft)。 [23/06/22] 我们对齐了[示例 API](src/api_demo.py) 与 [OpenAI API](https://platform.openai.com/docs/api-reference/chat) 的格式,您可以将微调模型接入**任意基于 ChatGPT 的应用**中。 [23/06/03] 我们实现了 4 比特的 LoRA 训练(也称 **[QLoRA](https://github.com/artidoro/qlora)**)。详细用法请参照 [examples](examples/README_zh.md)。 </details> ## 模型 | 模型名 | 模型大小 | Template | | ----------------------------------------------------------------- | -------------------------------- | ---------------- | | [Baichuan 2](https://huggingface.co/baichuan-inc) | 7B/13B | baichuan2 | | [BLOOM/BLOOMZ](https://huggingface.co/bigscience) | 
560M/1.1B/1.7B/3B/7.1B/176B | - | | [ChatGLM3](https://huggingface.co/THUDM) | 6B | chatglm3 | | [Command R](https://huggingface.co/CohereForAI) | 35B/104B | cohere | | [DeepSeek (Code/MoE)](https://huggingface.co/deepseek-ai) | 7B/16B/67B/236B | deepseek | | [Falcon](https://huggingface.co/tiiuae) | 7B/11B/40B/180B | falcon | | [Gemma/Gemma 2/CodeGemma](https://huggingface.co/google) | 2B/7B/9B/27B | gemma | | [GLM-4](https://huggingface.co/THUDM) | 9B | glm4 | | [Index](https://huggingface.co/IndexTeam) | 1.9B | index | | [InternLM2/InternLM2.5](https://huggingface.co/internlm) | 7B/20B | intern2 | | [Llama](https://github.com/facebookresearch/llama) | 7B/13B/33B/65B | - | | [Llama 2](https://huggingface.co/meta-llama) | 7B/13B/70B | llama2 | | [Llama 3-3.2](https://huggingface.co/meta-llama) | 1B/3B/8B/70B | llama3 | | [Llama 3.2 Vision](https://huggingface.co/meta-llama) | 11B/90B | mllama | | [LLaVA-1.5](https://huggingface.co/llava-hf) | 7B/13B | llava | | [LLaVA-NeXT](https://huggingface.co/llava-hf) | 7B/8B/13B/34B/72B/110B | llava_next | | [LLaVA-NeXT-Video](https://huggingface.co/llava-hf) | 7B/34B | llava_next_video | | [MiniCPM](https://huggingface.co/openbmb) | 1B/2B/4B | cpm/cpm3 | | [Mistral/Mixtral](https://huggingface.co/mistralai) | 7B/8x7B/8x22B | mistral | | [OLMo](https://huggingface.co/allenai) | 1B/7B | - | | [PaliGemma](https://huggingface.co/google) | 3B | paligemma | | [Phi-1.5/Phi-2](https://huggingface.co/microsoft) | 1.3B/2.7B | - | | [Phi-3](https://huggingface.co/microsoft) | 4B/7B/14B | phi | | [Pixtral](https://huggingface.co/mistralai) | 12B | pixtral | | [Qwen/QwQ (1-2.5) (Code/Math/MoE)](https://huggingface.co/Qwen) | 0.5B/1.5B/3B/7B/14B/32B/72B/110B | qwen | | [Qwen2-VL](https://huggingface.co/Qwen) | 2B/7B/72B | qwen2_vl | | [Skywork o1](https://huggingface.co/Skywork) | 8B | skywork_o1 | | [StarCoder 2](https://huggingface.co/bigcode) | 3B/7B/15B | - | | [XVERSE](https://huggingface.co/xverse) | 7B/13B/65B | xverse | | [Yi/Yi-1.5 (Code)](https://huggingface.co/01-ai) | 1.5B/6B/9B/34B | yi | | [Yi-VL](https://huggingface.co/01-ai) | 6B/34B | yi_vl | | [Yuan 2](https://huggingface.co/IEITYuan) | 2B/51B/102B | yuan | > [!NOTE] > 对于所有“基座”(Base)模型,`template` 参数可以是 `default`, `alpaca`, `vicuna` 等任意值。但“对话”(Instruct/Chat)模型请务必使用**对应的模板**。 > > 请务必在训练和推理时采用**完全一致**的模板。 项目所支持模型的完整列表请参阅 [constants.py](src/llamafactory/extras/constants.py)。 您也可以在 [template.py](src/llamafactory/data/template.py) 中添加自己的对话模板。 ## 训练方法 | 方法 | 全参数训练 | 部分参数训练 | LoRA | QLoRA | | ---------------------- | ------------------ | ------------------ | ------------------ | ------------------ | | 预训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | 指令监督微调 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | 奖励模型训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | PPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | DPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | KTO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | ORPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | | SimPO 训练 | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | > [!TIP] > 有关 PPO 的实现细节,请参考[此博客](https://newfacade.github.io/notes-on-reinforcement-learning/17-ppo-trl.html)。 ## 数据集 <details><summary>预训练数据集</summary> - [Wiki 
Demo (en)](data/wiki_demo.txt) - [RefinedWeb (en)](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) - [RedPajama V2 (en)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-V2) - [Wikipedia (en)](https://huggingface.co/datasets/olm/olm-wikipedia-20221220) - [Wikipedia (zh)](https://huggingface.co/datasets/pleisto/wikipedia-cn-20230720-filtered) - [Pile (en)](https://huggingface.co/datasets/EleutherAI/pile) - [SkyPile (zh)](https://huggingface.co/datasets/Skywork/SkyPile-150B) - [FineWeb (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb) - [FineWeb-Edu (en)](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) - [The Stack (en)](https://huggingface.co/datasets/bigcode/the-stack) - [StarCoder (en)](https://huggingface.co/datasets/bigcode/starcoderdata) </details> <details><summary>指令微调数据集</summary> - [Identity (en&zh)](data/identity.json) - [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca) - [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca-3) - [Alpaca GPT4 (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM) - [Glaive Function Calling V2 (en&zh)](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2) - [LIMA (en)](https://huggingface.co/datasets/GAIR/lima) - [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset) - [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN) - [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN) - [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN) - [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M) - [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M) - [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M) - [UltraChat (en)](https://github.com/thunlp/UltraChat) - [OpenPlatypus (en)](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) - [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k) - [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT) - [OpenOrca (en)](https://huggingface.co/datasets/Open-Orca/OpenOrca) - [SlimOrca (en)](https://huggingface.co/datasets/Open-Orca/SlimOrca) - [MathInstruct (en)](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) - [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M) - [Wiki QA (en)](https://huggingface.co/datasets/wiki_qa) - [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa) - [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn) - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) - [deepctrl (en&zh)](https://www.modelscope.cn/datasets/deepctrl/deepctrl-sft-data) - [Advertise Generating (zh)](https://huggingface.co/datasets/HasturOfficial/adgen) - [ShareGPT Hyperfiltered (en)](https://huggingface.co/datasets/totally-not-an-llm/sharegpt-hyperfiltered-3k) - [ShareGPT4 (en&zh)](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) - [UltraChat 200k (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) - [AgentInstruct (en)](https://huggingface.co/datasets/THUDM/AgentInstruct) - [LMSYS Chat 1M (en)](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) - [Evol Instruct V2 (en)](https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k) - [Cosmopedia (en)](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) - [STEM 
(zh)](https://huggingface.co/datasets/hfl/stem_zh_instruction) - [Ruozhiba (zh)](https://huggingface.co/datasets/hfl/ruozhiba_gpt4_turbo) - [Neo-sft (zh)](https://huggingface.co/datasets/m-a-p/neo_sft_phase2) - [Magpie-Pro-300K-Filtered (en)](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filtered) - [Magpie-ultra-v0.1 (en)](https://huggingface.co/datasets/argilla/magpie-ultra-v0.1) - [WebInstructSub (en)](https://huggingface.co/datasets/TIGER-Lab/WebInstructSub) - [OpenO1-SFT (en&zh)](https://huggingface.co/datasets/O1-OPEN/OpenO1-SFT) - [LLaVA mixed (en&zh)](https://huggingface.co/datasets/BUAADreamer/llava-en-zh-300k) - [Pokemon-gpt4o-captions (en&zh)](https://huggingface.co/datasets/jugg1024/pokemon-gpt4o-captions) - [Open Assistant (de)](https://huggingface.co/datasets/mayflowergmbh/oasst_de) - [Dolly 15k (de)](https://huggingface.co/datasets/mayflowergmbh/dolly-15k_de) - [Alpaca GPT4 (de)](https://huggingface.co/datasets/mayflowergmbh/alpaca-gpt4_de) - [OpenSchnabeltier (de)](https://huggingface.co/datasets/mayflowergmbh/openschnabeltier_de) - [Evol Instruct (de)](https://huggingface.co/datasets/mayflowergmbh/evol-instruct_de) - [Dolphin (de)](https://huggingface.co/datasets/mayflowergmbh/dolphin_de) - [Booksum (de)](https://huggingface.co/datasets/mayflowergmbh/booksum_de) - [Airoboros (de)](https://huggingface.co/datasets/mayflowergmbh/airoboros-3.0_de) - [Ultrachat (de)](https://huggingface.co/datasets/mayflowergmbh/ultra-chat_de) </details> <details><summary>偏好数据集</summary> - [DPO mixed (en&zh)](https://huggingface.co/datasets/hiyouga/DPO-En-Zh-20k) - [UltraFeedback (en)](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) - [RLHF-V (en)](https://huggingface.co/datasets/openbmb/RLHF-V-Dataset) - [VLFeedback (en)](https://huggingface.co/datasets/Zhihui/VLFeedback) - [Orca DPO Pairs (en)](https://huggingface.co/datasets/Intel/orca_dpo_pairs) - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf) - [Nectar (en)](https://huggingface.co/datasets/berkeley-nest/Nectar) - [Orca DPO (de)](https://huggingface.co/datasets/mayflowergmbh/intel_orca_dpo_pairs_de) - [KTO mixed (en)](https://huggingface.co/datasets/argilla/kto-mix-15k) </details> 部分数据集的使用需要确认,我们推荐使用下述命令登录您的 Hugging Face 账户。 ```bash pip install --upgrade huggingface_hub huggingface-cli login ``` ## 软硬件依赖 | 必需项 | 至少 | 推荐 | | ------------ | ------- | --------- | | python | 3.8 | 3.11 | | torch | 1.13.1 | 2.4.0 | | transformers | 4.41.2 | 4.43.4 | | datasets | 2.16.0 | 2.20.0 | | accelerate | 0.30.1 | 0.32.0 | | peft | 0.11.1 | 0.12.0 | | trl | 0.8.6 | 0.9.6 | | 可选项 | 至少 | 推荐 | | ------------ | ------- | --------- | | CUDA | 11.6 | 12.2 | | deepspeed | 0.10.0 | 0.14.0 | | bitsandbytes | 0.39.0 | 0.43.1 | | vllm | 0.4.3 | 0.5.0 | | flash-attn | 2.3.0 | 2.6.3 | ### 硬件依赖 \* *估算值* | 方法 | 精度 | 7B | 13B | 30B | 70B | 110B | 8x7B | 8x22B | | ----------------- | ---- | ----- | ----- | ----- | ------ | ------ | ----- | ------ | | Full | AMP | 120GB | 240GB | 600GB | 1200GB | 2000GB | 900GB | 2400GB | | Full | 16 | 60GB | 120GB | 300GB | 600GB | 900GB | 400GB | 1200GB | | Freeze | 16 | 20GB | 40GB | 80GB | 200GB | 360GB | 160GB | 400GB | | LoRA/GaLore/BAdam | 16 | 16GB | 32GB | 64GB | 160GB | 240GB | 120GB | 320GB | | QLoRA | 8 | 10GB | 20GB | 40GB | 80GB | 140GB | 60GB | 160GB | | QLoRA | 4 | 6GB | 12GB | 24GB | 48GB | 72GB | 30GB | 96GB | | QLoRA | 2 | 4GB | 8GB | 16GB | 24GB | 48GB | 18GB | 48GB | ## 如何使用 ### 安装 LLaMA Factory > [!IMPORTANT] > 此步骤为必需。 ```bash git clone --depth 1 
https://github.com/hiyouga/LLaMA-Factory.git cd LLaMA-Factory pip install -e ".[torch,metrics]" ``` 可选的额外依赖项:torch、torch-npu、metrics、deepspeed、liger-kernel、bitsandbytes、hqq、eetq、gptq、awq、aqlm、vllm、galore、badam、adam-mini、qwen、modelscope、openmind、quality > [!TIP] > 遇到包冲突时,可使用 `pip install --no-deps -e .` 解决。 <details><summary>Windows 用户指南</summary> 如果要在 Windows 平台上开启量化 LoRA(QLoRA),需要安装预编译的 `bitsandbytes` 库, 支持 CUDA 11.1 到 12.2, 请根据您的 CUDA 版本情况选择适合的[发布版本](https://github.com/jllllll/bitsandbytes-windows-webui/releases/tag/wheels)。 ```bash pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.41.2.post2-py3-none-win_amd64.whl ``` 如果要在 Windows 平台上开启 FlashAttention-2,需要安装预编译的 `flash-attn` 库,支持 CUDA 12.1 到 12.2,请根据需求到 [flash-attention](https://github.com/bdashore3/flash-attention/releases) 下载对应版本安装。 </details> <details><summary>昇腾 NPU 用户指南</summary> 在昇腾 NPU 设备上安装 LLaMA Factory 时,需要指定额外依赖项,使用 `pip install -e ".[torch-npu,metrics]"` 命令安装。此外,还需要安装 **[Ascend CANN Toolkit 与 Kernels](https://www.hiascend.com/developer/download/community/result?module=cann)**,安装方法请参考[安装教程](https://www.hiascend.com/document/detail/zh/CANNCommunityEdition/80RC2alpha002/quickstart/quickstart/quickstart_18_0004.html)或使用以下命令: ```bash # 请替换 URL 为 CANN 版本和设备型号对应的 URL # 安装 CANN Toolkit wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run bash Ascend-cann-toolkit_8.0.RC1.alpha001_linux-"$(uname -i)".run --install # 安装 CANN Kernels wget https://ascend-repo.obs.cn-east-2.myhuaweicloud.com/Milan-ASL/Milan-ASL%20V100R001C17SPC701/Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run bash Ascend-cann-kernels-910b_8.0.RC1.alpha001_linux.run --install # 设置环境变量 source /usr/local/Ascend/ascend-toolkit/set_env.sh ``` | 依赖项 | 至少 | 推荐 | | ------------ | ------- | ----------- | | CANN | 8.0.RC1 | 8.0.RC1 | | torch | 2.1.0 | 2.1.0 | | torch-npu | 2.1.0 | 2.1.0.post3 | | deepspeed | 0.13.2 | 0.13.2 | 请使用 `ASCEND_RT_VISIBLE_DEVICES` 而非 `CUDA_VISIBLE_DEVICES` 来指定运算设备。 如果遇到无法正常推理的情况,请尝试设置 `do_sample: false`。 下载预构建 Docker 镜像:[32GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/130.html) | [64GB](http://mirrors.cn-central-221.ovaijisuan.com/detail/131.html) </details> ### 数据准备 关于数据集文件的格式,请参考 [data/README_zh.md](data/README_zh.md) 的内容。你可以使用 HuggingFace / ModelScope / Modelers 上的数据集或加载本地数据集。 > [!NOTE] > 使用自定义数据集时,请更新 `data/dataset_info.json` 文件。 ### 快速开始 下面三行命令分别对 Llama3-8B-Instruct 模型进行 LoRA **微调**、**推理**和**合并**。 ```bash llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml llamafactory-cli chat examples/inference/llama3_lora_sft.yaml llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml ``` 高级用法请参考 [examples/README_zh.md](examples/README_zh.md)(包括多 GPU 微调)。 > [!TIP] > 使用 `llamafactory-cli help` 显示帮助信息。 ### LLaMA Board 可视化微调(由 [Gradio](https://github.com/gradio-app/gradio) 驱动) ```bash llamafactory-cli webui ``` ### 构建 Docker CUDA 用户: ```bash cd docker/docker-cuda/ docker compose up -d docker compose exec llamafactory bash ``` 昇腾 NPU 用户: ```bash cd docker/docker-npu/ docker compose up -d docker compose exec llamafactory bash ``` AMD ROCm 用户: ```bash cd docker/docker-rocm/ docker compose up -d docker compose exec llamafactory bash ``` <details><summary>不使用 Docker Compose 构建</summary> CUDA 用户: ```bash docker build -f ./docker/docker-cuda/Dockerfile \ --build-arg INSTALL_BNB=false \ --build-arg INSTALL_VLLM=false \ --build-arg INSTALL_DEEPSPEED=false \ --build-arg 
INSTALL_FLASHATTN=false \ --build-arg PIP_INDEX=https://pypi.org/simple \ -t llamafactory:latest . docker run -dit --gpus=all \ -v ./hf_cache:/root/.cache/huggingface \ -v ./ms_cache:/root/.cache/modelscope \ -v ./om_cache:/root/.cache/openmind \ -v ./data:/app/data \ -v ./output:/app/output \ -p 7860:7860 \ -p 8000:8000 \ --shm-size 16G \ --name llamafactory \ llamafactory:latest docker exec -it llamafactory bash ``` 昇腾 NPU 用户: ```bash # 根据您的环境选择镜像 docker build -f ./docker/docker-npu/Dockerfile \ --build-arg INSTALL_DEEPSPEED=false \ --build-arg PIP_INDEX=https://pypi.org/simple \ -t llamafactory:latest . # 根据您的资源更改 `device` docker run -dit \ -v ./hf_cache:/root/.cache/huggingface \ -v ./ms_cache:/root/.cache/modelscope \ -v ./om_cache:/root/.cache/openmind \ -v ./data:/app/data \ -v ./output:/app/output \ -v /usr/local/dcmi:/usr/local/dcmi \ -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \ -v /usr/local/Ascend/driver:/usr/local/Ascend/driver \ -v /etc/ascend_install.info:/etc/ascend_install.info \ -p 7860:7860 \ -p 8000:8000 \ --device /dev/davinci0 \ --device /dev/davinci_manager \ --device /dev/devmm_svm \ --device /dev/hisi_hdc \ --shm-size 16G \ --name llamafactory \ llamafactory:latest docker exec -it llamafactory bash ``` AMD ROCm 用户: ```bash docker build -f ./docker/docker-rocm/Dockerfile \ --build-arg INSTALL_BNB=false \ --build-arg INSTALL_VLLM=false \ --build-arg INSTALL_DEEPSPEED=false \ --build-arg INSTALL_FLASHATTN=false \ --build-arg PIP_INDEX=https://pypi.org/simple \ -t llamafactory:latest . docker run -dit \ -v ./hf_cache:/root/.cache/huggingface \ -v ./ms_cache:/root/.cache/modelscope \ -v ./om_cache:/root/.cache/openmind \ -v ./data:/app/data \ -v ./output:/app/output \ -v ./saves:/app/saves \ -p 7860:7860 \ -p 8000:8000 \ --device /dev/kfd \ --device /dev/dri \ --shm-size 16G \ --name llamafactory \ llamafactory:latest docker exec -it llamafactory bash ``` </details> <details><summary>数据卷详情</summary> - `hf_cache`:使用宿主机的 Hugging Face 缓存文件夹,允许更改为新的目录。 - `ms_cache`:类似 Hugging Face 缓存文件夹,为 ModelScope 用户提供。 - `om_cache`:类似 Hugging Face 缓存文件夹,为 Modelers 用户提供。 - `data`:宿主机中存放数据集的文件夹路径。 - `output`:将导出目录设置为该路径后,即可在宿主机中访问导出后的模型。 </details> ### 利用 vLLM 部署 OpenAI API ```bash API_PORT=8000 llamafactory-cli api examples/inference/llama3_vllm.yaml ``` > [!TIP] > API 文档请查阅[这里](https://platform.openai.com/docs/api-reference/chat/create)。 > > 示例:[图像理解](scripts/api_example/test_image.py) | [工具调用](scripts/api_example/test_toolcall.py) ### 从魔搭社区下载 如果您在 Hugging Face 模型和数据集的下载中遇到了问题,可以通过下述方法使用魔搭社区。 ```bash export USE_MODELSCOPE_HUB=1 # Windows 使用 `set USE_MODELSCOPE_HUB=1` ``` 将 `model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔搭社区](https://modelscope.cn/models)查看所有可用的模型,例如 `LLM-Research/Meta-Llama-3-8B-Instruct`。 ### 从魔乐社区下载 您也可以通过下述方法,使用魔乐社区下载数据集和模型。 ```bash export USE_OPENMIND_HUB=1 # Windows 使用 `set USE_OPENMIND_HUB=1` ``` 将 `model_name_or_path` 设置为模型 ID 来加载对应的模型。在[魔乐社区](https://modelers.cn/models)查看所有可用的模型,例如 `TeleAI/TeleChat-7B-pt`。 ### 使用 W&B 面板 若要使用 [Weights & Biases](https://wandb.ai) 记录实验数据,请在 yaml 文件中添加下面的参数。 ```yaml report_to: wandb run_name: test_run # 可选 ``` 在启动训练任务时,将 `WANDB_API_KEY` 设置为[密钥](https://wandb.ai/authorize)来登录 W&B 账户。 ## 使用了 LLaMA Factory 的项目 如果您有项目希望添加至下述列表,请通过邮件联系或者创建一个 PR。 <details><summary>点击显示</summary> 1. Wang et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. 2023. [[arxiv]](https://arxiv.org/abs/2308.02223) 1. Yu et al. Open, Closed, or Small Language Models for Text Classification? 2023. 
[[arxiv]](https://arxiv.org/abs/2308.10092) 1. Wang et al. UbiPhysio: Support Daily Functioning, Fitness, and Rehabilitation with Action Understanding and Feedback in Natural Language. 2023. [[arxiv]](https://arxiv.org/abs/2308.10526) 1. Luceri et al. Leveraging Large Language Models to Detect Influence Campaigns in Social Media. 2023. [[arxiv]](https://arxiv.org/abs/2311.07816) 1. Zhang et al. Alleviating Hallucinations of Large Language Models through Induced Hallucinations. 2023. [[arxiv]](https://arxiv.org/abs/2312.15710) 1. Wang et al. Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs. KDD 2024. [[arxiv]](https://arxiv.org/abs/2401.04319) 1. Wang et al. CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning. ACL 2024. [[arxiv]](https://arxiv.org/abs/2401.07286) 1. Choi et al. FACT-GPT: Fact-Checking Augmentation via Claim Matching with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2402.05904) 1. Zhang et al. AutoMathText: Autonomous Data Selection with Language Models for Mathematical Texts. 2024. [[arxiv]](https://arxiv.org/abs/2402.07625) 1. Lyu et al. KnowTuning: Knowledge-aware Fine-tuning for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11176) 1. Yang et al. LaCo: Large Language Model Pruning via Layer Collaps. 2024. [[arxiv]](https://arxiv.org/abs/2402.11187) 1. Bhardwaj et al. Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic. 2024. [[arxiv]](https://arxiv.org/abs/2402.11746) 1. Yang et al. Enhancing Empathetic Response Generation by Augmenting LLMs with Small-scale Empathetic Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11801) 1. Yi et al. Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2402.11809) 1. Cao et al. Head-wise Shareable Attention for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.11819) 1. Zhang et al. Enhancing Multilingual Capabilities of Large Language Models through Self-Distillation from Resource-Rich Languages. 2024. [[arxiv]](https://arxiv.org/abs/2402.12204) 1. Kim et al. Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2402.14714) 1. Yu et al. KIEval: A Knowledge-grounded Interactive Evaluation Framework for Large Language Models. ACL 2024. [[arxiv]](https://arxiv.org/abs/2402.15043) 1. Huang et al. Key-Point-Driven Data Synthesis with its Enhancement on Mathematical Reasoning. 2024. [[arxiv]](https://arxiv.org/abs/2403.02333) 1. Duan et al. Negating Negatives: Alignment without Human Positive Samples via Distributional Dispreference Optimization. 2024. [[arxiv]](https://arxiv.org/abs/2403.03419) 1. Xie and Schwertfeger. Empowering Robotics with Large Language Models: osmAG Map Comprehension with LLMs. 2024. [[arxiv]](https://arxiv.org/abs/2403.08228) 1. Wu et al. Large Language Models are Parallel Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2403.09073) 1. Zhang et al. EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling. 2024. [[arxiv]](https://arxiv.org/abs/2403.14541) 1. Weller et al. FollowIR: Evaluating and Teaching Information Retrieval Models to Follow Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2403.15246) 1. Hongbin Na. 
CBT-LLM: A Chinese Large Language Model for Cognitive Behavioral Therapy-based Mental Health Question Answering. COLING 2024. [[arxiv]](https://arxiv.org/abs/2403.16008) 1. Zan et al. CodeS: Natural Language to Code Repository via Multi-Layer Sketch. 2024. [[arxiv]](https://arxiv.org/abs/2403.16443) 1. Liu et al. Extensive Self-Contrast Enables Feedback-Free Language Model Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2404.00604) 1. Luo et al. BAdam: A Memory Efficient Full Parameter Training Method for Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.02827) 1. Du et al. Chinese Tiny LLM: Pretraining a Chinese-Centric Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2404.04167) 1. Ma et al. Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation. ICML 2024. [[arxiv]](https://arxiv.org/abs/2404.04316) 1. Liu et al. Dynamic Generation of Personalities with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.07084) 1. Shang et al. How Far Have We Gone in Stripped Binary Code Understanding Using Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.09836) 1. Huang et al. LLMTune: Accelerate Database Knob Tuning with Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2404.11581) 1. Deng et al. Text-Tuple-Table: Towards Information Integration in Text-to-Table Generation via Global Tuple Extraction. 2024. [[arxiv]](https://arxiv.org/abs/2404.14215) 1. Acikgoz et al. Hippocrates: An Open-Source Framework for Advancing Large Language Models in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2404.16621) 1. Zhang et al. Small Language Models Need Strong Verifiers to Self-Correct Reasoning. ACL 2024 Findings. [[arxiv]](https://arxiv.org/abs/2404.17140) 1. Zhou et al. FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering. NAACL 2024. [[arxiv]](https://arxiv.org/abs/2404.18585) 1. Xu et al. Large Language Models for Cyber Security: A Systematic Literature Review. 2024. [[arxiv]](https://arxiv.org/abs/2405.04760) 1. Dammu et al. "They are uncultured": Unveiling Covert Harms and Social Threats in LLM Generated Conversations. 2024. [[arxiv]](https://arxiv.org/abs/2405.05378) 1. Yi et al. A safety realignment framework via subspace-oriented model fusion for large language models. 2024. [[arxiv]](https://arxiv.org/abs/2405.09055) 1. Lou et al. SPO: Multi-Dimensional Preference Sequential Alignment With Implicit Reward Modeling. 2024. [[arxiv]](https://arxiv.org/abs/2405.12739) 1. Zhang et al. Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners. 2024. [[arxiv]](https://arxiv.org/abs/2405.13816) 1. Zhang et al. TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models. 2024. [[arxiv]](https://arxiv.org/abs/2405.20215) 1. Zihong Chen. Sentence Segmentation and Sentence Punctuation Based on XunziALLM. 2024. [[paper]](https://aclanthology.org/2024.lt4hala-1.30) 1. Gao et al. The Best of Both Worlds: Toward an Honest and Helpful Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2406.00380) 1. Wang and Song. MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset. 2024. [[arxiv]](https://arxiv.org/abs/2406.02106) 1. Hu et al. Computational Limits of Low-Rank Adaptation (LoRA) for Transformer-Based Models. 2024. [[arxiv]](https://arxiv.org/abs/2406.03136) 1. Ge et al. Time Sensitive Knowledge Editing through Efficient Finetuning. ACL 2024. 
[[arxiv]](https://arxiv.org/abs/2406.04496) 1. Tan et al. Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions. 2024. [[arxiv]](https://arxiv.org/abs/2406.05688) 1. Song et al. Turbo Sparse: Achieving LLM SOTA Performance with Minimal Activated Parameters. 2024. [[arxiv]](https://arxiv.org/abs/2406.05955) 1. Gu et al. RWKV-CLIP: A Robust Vision-Language Representation Learner. 2024. [[arxiv]](https://arxiv.org/abs/2406.06973) 1. Chen et al. Advancing Tool-Augmented Large Language Models: Integrating Insights from Errors in Inference Trees. 2024. [[arxiv]](https://arxiv.org/abs/2406.07115) 1. Zhu et al. Are Large Language Models Good Statisticians?. 2024. [[arxiv]](https://arxiv.org/abs/2406.07815) 1. Li et al. Know the Unknown: An Uncertainty-Sensitive Method for LLM Instruction Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2406.10099) 1. Ding et al. IntentionQA: A Benchmark for Evaluating Purchase Intention Comprehension Abilities of Language Models in E-commerce. 2024. [[arxiv]](https://arxiv.org/abs/2406.10173) 1. He et al. COMMUNITY-CROSS-INSTRUCT: Unsupervised Instruction Generation for Aligning Large Language Models to Online Communities. 2024. [[arxiv]](https://arxiv.org/abs/2406.12074) 1. Lin et al. FVEL: Interactive Formal Verification Environment with Large Language Models via Theorem Proving. 2024. [[arxiv]](https://arxiv.org/abs/2406.14408) 1. Treutlein et al. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data. 2024. [[arxiv]](https://arxiv.org/abs/2406.14546) 1. Feng et al. SS-Bench: A Benchmark for Social Story Generation and Evaluation. 2024. [[arxiv]](https://arxiv.org/abs/2406.15695) 1. Feng et al. Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement. 2024. [[arxiv]](https://arxiv.org/abs/2406.17233) 1. Liu et al. Large Language Models for Cuffless Blood Pressure Measurement From Wearable Biosignals. 2024. [[arxiv]](https://arxiv.org/abs/2406.18069) 1. Iyer et al. Exploring Very Low-Resource Translation with LLMs: The University of Edinburgh's Submission to AmericasNLP 2024 Translation Task. AmericasNLP 2024. [[paper]](https://aclanthology.org/2024.americasnlp-1.25) 1. Li et al. Calibrating LLMs with Preference Optimization on Thought Trees for Generating Rationale in Science Question Scoring. 2024. [[arxiv]](https://arxiv.org/abs/2406.19949) 1. Yang et al. Financial Knowledge Large Language Model. 2024. [[arxiv]](https://arxiv.org/abs/2407.00365) 1. Lin et al. DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging. 2024. [[arxiv]](https://arxiv.org/abs/2407.01470) 1. Bako et al. Evaluating the Semantic Profiling Abilities of LLMs for Natural Language Utterances in Data Visualization. 2024. [[arxiv]](https://arxiv.org/abs/2407.06129) 1. Huang et al. RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization. 2024. [[arxiv]](https://arxiv.org/abs/2407.08044) 1. Jiang et al. LLM-Collaboration on Automatic Science Journalism for the General Audience. 2024. [[arxiv]](https://arxiv.org/abs/2407.09756) 1. Inouye et al. Applied Auto-tuning on LoRA Hyperparameters. 2024. [[paper]](https://scholarcommons.scu.edu/cseng_senior/272/) 1. Qi et al. Research on Tibetan Tourism Viewpoints information generation system based on LLM. 2024. [[arxiv]](https://arxiv.org/abs/2407.13561) 1. Xu et al. Course-Correction: Safety Alignment Using Synthetic Preferences. 2024. [[arxiv]](https://arxiv.org/abs/2407.16637) 1. Sun et al. 
LAMBDA: A Large Model Based Data Agent. 2024. [[arxiv]](https://arxiv.org/abs/2407.17535) 1. Zhu et al. CollectiveSFT: Scaling Large Language Models for Chinese Medical Benchmark with Collective Instructions in Healthcare. 2024. [[arxiv]](https://arxiv.org/abs/2407.19705) 1. Yu et al. Correcting Negative Bias in Large Language Models through Negative Attention Score Alignment. 2024. [[arxiv]](https://arxiv.org/abs/2408.00137) 1. Xie et al. The Power of Personalized Datasets: Advancing Chinese Composition Writing for Elementary School through Targeted Model Fine-Tuning. IALP 2024. [[paper]](https://www.asianlp.sg/conferences/ialp2024/proceedings/papers/IALP2024_P055.pdf) 1. Liu et al. Instruct-Code-Llama: Improving Capabilities of Language Model in Competition Level Code Generation by Online Judge Feedback. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_11) 1. Wang et al. Cybernetic Sentinels: Unveiling the Impact of Safety Data Selection on Model Security in Supervised Fine-Tuning. ICIC 2024. [[paper]](https://link.springer.com/chapter/10.1007/978-981-97-5669-8_23) 1. Xia et al. Understanding the Performance and Estimating the Cost of LLM Fine-Tuning. 2024. [[arxiv]](https://arxiv.org/abs/2408.04693) 1. Zeng et al. Perceive, Reflect, and Plan: Designing LLM Agent for Goal-Directed City Navigation without Instructions. 2024. [[arxiv]](https://arxiv.org/abs/2408.04168) 1. Xia et al. Using Pre-trained Language Model for Accurate ESG Prediction. FinNLP 2024. [[paper]](https://aclanthology.org/2024.finnlp-2.1/) 1. Liang et al. I-SHEEP: Self-Alignment of LLM from Scratch through an Iterative Self-Enhancement Paradigm. 2024. [[arxiv]](https://arxiv.org/abs/2408.08072) 1. Bai et al. Aligning Large Language Model with Direct Multi-Preference Optimization for Recommendation. CIKM 2024. [[paper]](https://dl.acm.org/doi/10.1145/3627673.3679611) 1. **[StarWhisper](https://github.com/Yu-Yang-Li/StarWhisper)**: 天文大模型 StarWhisper,基于 ChatGLM2-6B 和 Qwen-14B 在天文数据上微调而得。 1. **[DISC-LawLLM](https://github.com/FudanDISC/DISC-LawLLM)**: 中文法律领域大模型 DISC-LawLLM,基于 Baichuan-13B 微调而得,具有法律推理和知识检索能力。 1. **[Sunsimiao](https://github.com/X-D-Lab/Sunsimiao)**: 孙思邈中文医疗大模型 Sumsimiao,基于 Baichuan-7B 和 ChatGLM-6B 在中文医疗数据上微调而得。 1. **[CareGPT](https://github.com/WangRongsheng/CareGPT)**: 医疗大模型项目 CareGPT,基于 LLaMA2-7B 和 Baichuan-13B 在中文医疗数据上微调而得。 1. **[MachineMindset](https://github.com/PKU-YuanGroup/Machine-Mindset/)**:MBTI性格大模型项目,根据数据集与训练方式让任意 LLM 拥有 16 个不同的性格类型。 1. **[Luminia-13B-v3](https://huggingface.co/Nekochu/Luminia-13B-v3)**:一个用于生成 Stable Diffusion 提示词的大型语言模型。[[demo]](https://huggingface.co/spaces/Nekochu/Luminia-13B_SD_Prompt) 1. **[Chinese-LLaVA-Med](https://github.com/BUAADreamer/Chinese-LLaVA-Med)**:中文多模态医学大模型,基于 LLaVA-1.5-7B 在中文多模态医疗数据上微调而得。 1. **[AutoRE](https://github.com/THUDM/AutoRE)**:基于大语言模型的文档级关系抽取系统。 1. **[NVIDIA RTX AI Toolkit](https://github.com/NVIDIA/RTX-AI-Toolkit)**:在 Windows 主机上利用英伟达 RTX 设备进行大型语言模型微调的开发包。 1. **[LazyLLM](https://github.com/LazyAGI/LazyLLM)**:一个低代码构建多 Agent 大模型应用的开发工具,支持基于 LLaMA Factory 的模型微调. 1. 
**[RAG-Retrieval](https://github.com/NLPJCL/RAG-Retrieval)**:一个全链路 RAG 检索模型微调、推理和蒸馏代码库。[[blog]](https://zhuanlan.zhihu.com/p/987727357) </details> ## 协议 本仓库的代码依照 [Apache-2.0](LICENSE) 协议开源。 使用模型权重时,请遵循对应的模型协议:[Baichuan 2](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/blob/main/Community%20License%20for%20Baichuan%202%20Model.pdf) / [BLOOM](https://huggingface.co/spaces/bigscience/license) / [ChatGLM3](https://github.com/THUDM/ChatGLM3/blob/main/MODEL_LICENSE) / [Command R](https://cohere.com/c4ai-cc-by-nc-license) / [DeepSeek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/main/LICENSE-MODEL) / [Falcon](https://huggingface.co/tiiuae/falcon-180B/blob/main/LICENSE.txt) / [Gemma](https://ai.google.dev/gemma/terms) / [GLM-4](https://huggingface.co/THUDM/glm-4-9b/blob/main/LICENSE) / [Index](https://huggingface.co/IndexTeam/Index-1.9B/blob/main/LICENSE) / [InternLM2](https://github.com/InternLM/InternLM#license) / [Llama](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) / [Llama 2 (LLaVA-1.5)](https://ai.meta.com/llama/license/) / [Llama 3](https://llama.meta.com/llama3/license/) / [MiniCPM](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md) / [Mistral/Mixtral/Pixtral](LICENSE) / [OLMo](LICENSE) / [Phi-1.5/Phi-2](https://huggingface.co/microsoft/phi-1_5/resolve/main/Research%20License.docx) / [Phi-3](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/LICENSE) / [Qwen](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) / [Skywork](https://huggingface.co/Skywork/Skywork-13B-base/blob/main/Skywork%20Community%20License.pdf) / [StarCoder 2](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) / [XVERSE](https://github.com/xverse-ai/XVERSE-13B/blob/main/MODEL_LICENSE.pdf) / [Yi](https://huggingface.co/01-ai/Yi-6B/blob/main/LICENSE) / [Yi-1.5](LICENSE) / [Yuan 2](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan) ## 引用 如果您觉得此项目有帮助,请考虑以下列格式引用 ```bibtex @inproceedings{zheng2024llamafactory, title={LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models}, author={Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma}, booktitle={Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)}, address={Bangkok, Thailand}, publisher={Association for Computational Linguistics}, year={2024}, url={http://arxiv.org/abs/2403.13372} } ``` ## 致谢 本项目受益于 [PEFT](https://github.com/huggingface/peft)、[TRL](https://github.com/huggingface/trl)、[QLoRA](https://github.com/artidoro/qlora) 和 [FastChat](https://github.com/lm-sys/FastChat),感谢以上诸位作者的付出。 ## Star History ![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/LLaMA-Factory&type=Date)
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/README_zh.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/README_zh.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 47465 }
Extend to other RL(HF) algorithms
=================================

We have already implemented the complete training pipeline of the PPO algorithm. To extend veRL to other algorithms, we analyze the high-level principles of using veRL and provide a tutorial for implementing the DPO algorithm. Users can follow a similar paradigm to extend to other RL algorithms.

.. note:: **Key ideas**: A single process drives multi-process computation and data communication.

Overall Approach
----------------

Step 1: Consider which multi-machine, multi-GPU computations are needed for each model, such as ``generate_sequence``, ``compute_log_prob`` and ``update_policy`` in the actor_rollout model. Implement distributed single-program-multiple-data (SPMD) computation and encapsulate them into APIs.

Step 2: Based on different distributed scenarios, including FSDP and 3D parallelism in Megatron-LM, implement single-process control of data interaction among the multi-process computations.

Step 3: Utilize the encapsulated APIs to implement the control flow.

Example: Online DPO
-------------------

We use veRL to implement a simple online DPO algorithm. The algorithm flow of online DPO is as follows:

1. There is a prompt (rollout) generator which has the same weights as the actor model. After a batch of prompts is fed into the generator, it generates N responses for each prompt.
2. Send all the prompts + responses to a verifier for scoring, which can be a reward model or a rule-based function. Then sort them into pairs to form a training batch.
3. Use this training batch to train the actor model using DPO. During the process, a reference policy is needed.

Step 1: What are the multi-machine multi-GPU computations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**Sample Generator**

Implementation details:

.. code:: python

   from verl.single_controller.base import Worker
   from verl.single_controller.ray import RayWorkerGroup, RayClassWithInitArgs, RayResourcePool
   import ray

   @ray.remote
   class SampleGenerator(Worker):

       def __init__(self, config):
           super().__init__()
           self.config = config

       def generate_sequences(self, data):
           pass

Here, ``SampleGenerator`` can be viewed as a group of processes launched by ``torchrun``, with each process running the same code (SPMD). ``SampleGenerator`` needs to implement a ``generate_sequences`` API for the control flow to call. The implementation inside can use any inference engine, including vLLM, SGLang and Hugging Face. Users can largely reuse the code in verl/verl/trainer/ppo/rollout/vllm_rollout/vllm_rollout.py, so we won't go into details here.

**ReferencePolicy inference**

API: compute reference log probability

.. code:: python

   from verl.single_controller.base import Worker
   import ray

   @ray.remote
   class ReferencePolicy(Worker):

       def __init__(self):
           super().__init__()
           self.model = Model()

       def infer(self, data):
           return self.model(data)

**Actor update**

API: update actor model parameters

.. code:: python

   from verl.single_controller.base import Worker
   import ray

   @ray.remote
   class DPOActor(Worker):

       def __init__(self):
           super().__init__()
           self.model = Model()
           self.model = FSDP(self.model)  # or other distributed strategy
           self.optimizer = optim.Adam(self.model.parameters(), lr=1e-3)
           self.loss_fn = xxx  # e.g. a pairwise DPO loss

       def update(self, data):
           self.optimizer.zero_grad()
           logits = self.model(data)
           loss = self.loss_fn(logits)
           loss.backward()
           self.optimizer.step()

**Notes: How to distinguish between control processes and distributed computation processes**
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

- Control processes are generally functions directly decorated with ``@ray.remote``.
- Computation processes are all wrapped into a ``RayWorkerGroup``.

Users can reuse most of the distributed computation logic implemented for the PPO algorithm, including the FSDP and Megatron-LM backends in verl/verl/trainer/ppo.

Step 2: Based on different distributed scenarios, implement single-process control of multi-process data interaction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**The core problem to solve here is how a single process sends data to multiple processes and drives the multi-process computation, and how the control process obtains the results of that computation.**

First, we initialize the multi-process ``WorkerGroup`` in the control process.

.. code:: python

   @ray.remote(num_cpus=1)
   def main_task(config):
       # construct SampleGenerator
       resource_pool = RayResourcePool(process_on_nodes=[8] * 2)  # 16 GPUs
       ray_cls = RayClassWithInitArgs(SampleGenerator, config=config)
       # put SampleGenerator onto resource pool
       worker_group = RayWorkerGroup(resource_pool, ray_cls)

       # construct reference policy

As we can see, in the control process, multiple processes are wrapped into a ``RayWorkerGroup``. Inside this ``WorkerGroup``, there is a ``self._workers`` member, where each worker is a Ray actor (https://docs.ray.io/en/latest/ray-core/actors.html) of SampleGenerator. ray_trainer.md also provides an implementation of ``MegatronRayWorkerGroup``.

Assuming the model is distributed using FSDP, and there is a batch of data on the control process, then for data parallelism the underlying calling process is:

.. code:: python

   data = xxx
   data_list = data.chunk(dp_size)

   output = []
   for i, d in enumerate(data_list):
       # worker_group._workers[i] is a SampleGenerator
       output.append(worker_group._workers[i].generate_sequences.remote(d))

   output = ray.get(output)
   output = torch.cat(output)

A single process calling multiple processes involves the following 3 steps:

1. Split the data into DP parts on the control process.
2. Send the data to the remote workers, call the remote computation through RPC, and utilize the multi-process computation.
3. Obtain the computation results of each worker on the control process and merge them.

Frequently repeating these 3 steps on the controller process greatly hurts code readability. **In veRL, we have abstracted and encapsulated these 3 steps, so that the worker's method + dispatch + collect can be registered into the worker_group.**

.. code:: python

   from verl.single_controller.base.decorator import register

   def dispatch_data(worker_group, data):
       return data.chunk(worker_group.world_size)

   def collect_data(worker_group, data):
       return torch.cat(data)

   dispatch_mode = {
       'dispatch_fn': dispatch_data,
       'collect_fn': collect_data
   }

   @register(dispatch_mode=dispatch_mode)
   def generate_sequences(self, data):
       pass

In this way, we can directly call the method inside the worker through the ``worker_group`` on the control (driver) process (which is a single process):

.. code:: python

   output = worker_group.generate_sequences(data)

This single line includes data splitting, data distribution and computation, and data collection.

Furthermore, the model parallelism size of each model is usually fixed, including dp, tp and pp. So for these common distributed scenarios, we have pre-implemented specific dispatch and collect methods in `decorator.py <https://github.com/volcengine/verl/blob/main/verl/single_controller/base/decorator.py>`_, which can be used directly to wrap the computations.

.. code:: python

   from verl.single_controller.base.decorator import register, Dispatch

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def generate_sequences(self, data: DataProto) -> DataProto:
       pass

This mode requires the data interface to be ``DataProto``. The definition of ``DataProto`` is in `protocol.py <https://github.com/volcengine/verl/blob/main/verl/protocol.py>`_.

Step 3: Main training loop
~~~~~~~~~~~~~~~~~~~~~~~~~~

With the above training flows, we can implement the algorithm's control flow. It is recommended that ``main_task`` also be a Ray remote process.

.. code:: python

   @ray.remote(num_cpus=1)
   def main_task(config):
       # construct SampleGenerator
       resource_pool = RayResourcePool(process_on_nodes=[8] * 2)  # 16 GPUs
       ray_cls = RayClassWithInitArgs(SampleGenerator, config=config)
       # put SampleGenerator onto resource pool
       sample_gen = RayWorkerGroup(resource_pool, ray_cls)

       # construct reference policy
       ray_cls = RayClassWithInitArgs(ReferencePolicy)
       ref_policy = RayWorkerGroup(resource_pool, ray_cls)

       # construct actor
       ray_cls = RayClassWithInitArgs(DPOActor)
       dpo_policy = RayWorkerGroup(resource_pool, ray_cls)

       dataloader = DataLoader()

       for data in dataloader:
           # generate data
           data = sample_gen.generate_sequences(data)
           # generate scores for each data
           data = generate_scores(data)
           # generate pairwise data using scores
           data = generate_pairwise_data(data)
           # generate ref_log_prob
           data.batch['ref_log_prob'] = ref_policy.infer(data)
           # update using dpo
           dpo_policy.update(data)
           # logging

Here, different ``WorkerGroups`` can be placed in the same resource pool or in different resource pools using ``create_colocated_worker_cls``, similarly to `ray_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/ray_trainer.py>`_.
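
As a closing illustration of the ``Dispatch.DP_COMPUTE_PROTO`` mode from Step 2, the sketch below shows what a driver-side call could look like inside ``main_task`` once ``generate_sequences`` accepts a ``DataProto``. This is a minimal sketch rather than verbatim veRL code: the ``DataProto.from_dict`` call and the tensor names and shapes are assumptions used for illustration; see `protocol.py <https://github.com/volcengine/verl/blob/main/verl/protocol.py>`_ for the actual interface.

.. code:: python

   import torch
   from verl.protocol import DataProto

   # Placeholder prompt batch: 16 prompts of 512 token ids plus an attention mask.
   prompts = DataProto.from_dict(tensors={
       'input_ids': torch.randint(0, 32000, (16, 512)),
       'attention_mask': torch.ones(16, 512, dtype=torch.long),
   })

   # Because generate_sequences is registered with Dispatch.DP_COMPUTE_PROTO,
   # this single driver-side call splits the DataProto across the data-parallel
   # workers, runs the remote computation, and concatenates the per-worker
   # results back into a single DataProto on the driver.
   responses = sample_gen.generate_sequences(prompts)

   # The same pattern applies to the other worker groups in the main loop,
   # e.g. ref_policy.infer(responses) or dpo_policy.update(responses).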
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/advance/dpo_extension.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/advance/dpo_extension.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 9680 }
Add models to FSDP backend =========================== Model -------------------------- In principle, our FSDP backend can support any HF model and we can sychronoize the actor model weight with vLLM using `hf_weight_loader.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_5_4/hf_weight_loader.py>`_. However, ``hf_weight_loader`` is will gather the full state_dict of a model during synchronization, which may cause OOM. We suggest using ``dtensor_weight_loader`` which gather the full model parameter layer by layer to reduce the peak memory usage. We already support dtensor weight loader for the models below in `dtensor_weight_loader.py <https://github.com/volcengine/verl/blob/main/verl/third_party/vllm/vllm_v_0_5_4/dtensor_weight_loaders.py>`_.: - ``GPT2LMHeadModel`` - ``LlamaForCausalLM`` - ``LLaMAForCausalLM`` - ``MistralForCausalLM`` - ``InternLMForCausalLM`` - ``AquilaModel`` - ``AquilaForCausalLM`` - ``Phi3ForCausalLM`` - ``GemmaForCausalLM`` - ``Gemma2ForCausalLM`` - ``GPTBigCodeForCausalLM`` - ``Starcoder2ForCausalLM`` - ``Qwen2ForCausalLM`` - ``DeepseekV2ForCausalLM`` To implement ``dtensor_weight_loader`` of a model that's supported in vLLM, follow the guide of gemma model below: 1. Copy the ``load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]])`` from the vllm model class to ``dtensor_weight_loaders.py`` 2. Modify the arguments to ``(actor_weights: Dict, vllm_model: nn.Module)`` 3. Replace the ``self`` to ``vllm_model`` 4. Add the ``local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight)`` before each ``param = params_dict[name]`` and modify the following weight loading using ``local_loaded_weight``. 5. Register the implemented dtensor weight loader to ``__MODEL_DTENSOR_WEIGHT_LOADER_REGISTRY__``. .. code-block:: diff - def load_weights(self, weights: Iterable[Tuple[str, torch.Tensor]]): + def gemma_dtensor_weight_loader(actor_weights: Dict, vllm_model: nn.Module) -> nn.Module: stacked_params_mapping = [ # (param_name, shard_name, shard_id) ("qkv_proj", "q_proj", "q"), ("qkv_proj", "k_proj", "k"), ("qkv_proj", "v_proj", "v"), ("gate_up_proj", "gate_proj", 0), ("gate_up_proj", "up_proj", 1), ] - params_dict = dict(self.named_parameters()) + params_dict = dict(vllm_model.named_parameters()) loaded_params = set() - for name, loaded_weight in weights: + for name, loaded_weight in actor_weights.items(): for (param_name, shard_name, shard_id) in stacked_params_mapping: if shard_name not in name: continue name = name.replace(shard_name, param_name) # Skip loading extra bias for GPTQ models. if name.endswith(".bias") and name not in params_dict: continue + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight) param = params_dict[name] weight_loader = param.weight_loader - weight_loader(param, loaded_weight, shard_id) + weight_loader(param, local_loaded_weight.to(dtype=param.dtype), shard_id) break else: # lm_head is not used in vllm as it is tied with embed_token. # To prevent errors, skip loading lm_head.weight. if "lm_head.weight" in name: continue # Skip loading extra bias for GPTQ models. 
if name.endswith(".bias") and name not in params_dict: continue + local_loaded_weight = redistribute_dtensor(param_name=name, loaded_weights=loaded_weight) param = params_dict[name] weight_loader = getattr(param, "weight_loader", default_weight_loader) - weight_loader(param, loaded_weight) + weight_loader(param, local_loaded_weight.to(dtype=param.dtype)) loaded_params.add(name) unloaded_params = params_dict.keys() - loaded_params if unloaded_params: raise RuntimeError( "Some weights are not initialized from checkpoints: " f"{unloaded_params}")
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/advance/fsdp_extension.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/advance/fsdp_extension.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 4386 }
Add models to Megatron-LM backend
===================================

Model
-----------

The most challenging aspect of using the Megatron-LM backend is implementing the models for training. Currently, we implement a Llama model that supports data parallelism, tensor parallelism, pipeline parallelism (also vPP) and sequence parallelism. We also implement remove-padding on the Llama model, which can be found in `modeling_llama_megatron.py <https://github.com/volcengine/verl/blob/main/verl/models/llama/megatron/modeling_llama_megatron.py>`_.

To support other models, users are required to implement:

1. A model similar to ``modeling_llama_megatron.py`` that satisfies the parallelism requirements of Megatron-LM. Then register your model in `registry.py <https://github.com/volcengine/verl/blob/main/verl/models/registry.py>`_.
2. Checkpoint utils that can load a full checkpoint (e.g. a huggingface checkpoint) into the partitioned models during runtime. Then register your loader to ``weight_loader_registry`` in `weight_loader_registry.py <https://github.com/volcengine/verl/blob/main/verl/models/weight_loader_registry.py>`_.
3. A weight loader that synchronizes the weights from Megatron to the rollout (vLLM) model. Note that both the actor model and rollout model are partitioned during runtime. So, it's advisable to match the weight names in the actor model implementation. Otherwise, you may need an additional name mapping and even weight transformation.
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/advance/megatron_extension.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/advance/megatron_extension.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 1471 }
Ray API Design Tutorial ======================================= We provide a tutorial for our Ray API design, including: - Ray basic concepts - Resource Pool and RayWorkerGroup - Data Dispatch, Execution and Collection - Initialize the RayWorkerGroup and execute the distributed computation in the given Resource Pool See details in `tutorial.ipynb <https://github.com/volcengine/verl/blob/main/examples/ray/tutorial.ipynb>`_.
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/advance/placement.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/advance/placement.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 429 }
.. _config-explain-page: Config Explaination =================== ppo_trainer.yaml for FSDP Backend --------------------------------- Data ~~~~ .. code:: yaml data: tokenizer: null train_files: ~/data/rlhf/gsm8k/train.parquet val_files: ~/data/rlhf/gsm8k/test.parquet prompt_key: prompt max_prompt_length: 512 max_response_length: 512 train_batch_size: 1024 val_batch_size: 1312 return_raw_input_ids: False # This should be set to true when the tokenizer between policy and rm differs return_raw_chat: False - ``data.train_files``: Training set parquet. Can be a list or a single file. The program will read all files into memory, so it can't be too large (< 100GB). The path can be either local path or HDFS path. For HDFS path, we provide utils to download it to DRAM and convert the HDFS path to local path. - ``data.val_files``: Validation parquet. Can be a list or a single file. - ``data.prompt_key``: The field in the dataset where the prompt is located. Default is 'prompt'. - ``data.max_prompt_length``: Maximum prompt length. All prompts will be left-padded to this length. An error will be reported if the length is too long - ``data.max_response_length``: Maximum response length. Rollout in RL algorithms (e.g. PPO) generates up to this length - ``data.train_batch_size``: Batch size sampled for one training iteration of different RL algorithms. - ``data.val_batch_size``: Batch size sampled for one validation iteration. - ``data.return_raw_input_ids``: Whether to return the original input_ids without adding chat template. This is mainly used to accommodate situations where the reward model's chat template differs from the policy. It needs to be decoded first, then apply the RM's chat template. If using a model-based RM, and the policy and RM chat_templates are different, this flag needs to be set - ``data.return_raw_chat``: - ``data.truncation``: Truncate the input_ids or prompt length if they exceed max_prompt_length. Default is 'error', not allow exceed the max_prompt_length. The users should increase the max_prompt_length if throwing the error. Actor/Rollout/Reference Policy ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code:: yaml actor_rollout_ref: hybrid_engine: True model: path: ~/models/deepseek-llm-7b-chat external_lib: null override_config: {} enable_gradient_checkpointing: False actor: strategy: fsdp # This is for backward-compatibility ppo_mini_batch_size: 256 ppo_micro_batch_size: 64 grad_clip: 1.0 clip_ratio: 0.2 entropy_coeff: 0.001 ppo_epochs: 1 shuffle: True optim: lr: 1e-6 lr_warmup_steps_ratio: 0. 
# the total steps will be injected during runtime min_lr_ratio: null # only useful for warmup with cosine warmup_style: constant # select from constant/cosine total_training_steps: -1 # must be override by program fsdp_config: wrap_policy: # transformer_layer_cls_to_wrap: None min_num_params: 0 param_offload: False grad_offload: False optimizer_offload: False ref: fsdp_config: param_offload: False wrap_policy: # transformer_layer_cls_to_wrap: None min_num_params: 0 log_prob_micro_batch_size: 128 rollout: name: vllm temperature: 1.0 top_k: -1 # 0 for hf rollout, -1 for vllm rollout top_p: 1 response_length: ${data.max_response_length} # for vllm rollout dtype: bfloat16 # should align with FSDP gpu_memory_utilization: 0.5 ignore_eos: False enforce_eager: True free_cache_engine: True load_format: dummy_dtensor # or dummy_hf or dummy_megatron tensor_model_parallel_size: 2 max_num_batched_tokens: 8192 max_num_seqs: 1024 log_prob_micro_batch_size: 128 # for vllm and hf rollout do_sample: True **Common config for actor, rollout and reference model** - ``actor_rollout_ref.hybrid_engine``: Whether it's a hybrid engine, currently only supports hybrid engine - ``actor_rollout_ref.model.path``: Huggingface model path. This can be either local path or HDFS path. For HDFS path, we provide utils to download it to DRAM and convert the HDFS path to local path. - ``actor_rollout_ref.model.external_libs``: Additional Python packages that need to be imported. Used to register models or tokenizers into the Huggingface system. - ``actor_rollout_ref.model.override_config``: Used to override some of the model's original configurations, mainly dropout - ``actor_rollout_ref.model.enable_gradient_checkpointing``: Whether to enable gradient checkpointing for the actor **Actor model** - ``actor_rollout_ref.actor.strategy``: fsdp or megatron. In this example, we use fsdp backend. - ``actor_rollout_ref.actor.ppo_mini_batch_size``: One sample is split into multiple sub-batches with batch_size=ppo_mini_batch_size for PPO updates - ``actor_rollout_ref.actor.ppo_micro_batch_size``: Similar to gradient accumulation, the micro_batch_size for one forward pass, trading speed for GPU memory - ``actor_rollout_ref.actor.grad_clip``: Gradient clipping for actor updates - ``actor_rollout_ref.actor.clip_ratio``: PPO clip ratio - ``actor_rollout_ref.actor.entropy_coeff``: The weight of entropy when calculating PPO loss - ``actor_rollout_ref.actor.ppo_epochs``: Number of epochs for PPO updates on one set of sampled data - ``actor_rollout_ref.actor.shuffle``: Whether to shuffle data when there are multiple epochs - ``actor_rollout_ref.actor.optim``: Actor's optimizer parameters - ``actor_rollout_ref.actor.fsdp_config``: FSDP config for actor training - ``wrap_policy``: FSDP wrap policy. By default, it uses Huggingface's wrap policy, i.e., wrapping by DecoderLayer - No need to set transformer_layer_cls_to_wrap, so we comment it. - ``*_offload``: Whether to enable parameter, gradient and optimizer offload - Trading speed for GPU memory. **Reference Model** - ``actor_rollout_ref.ref``: FSDP config same as actor. **For models larger than 7B, it's recommended to turn on offload for ref by default** - ``actor_rollout_ref.ref.log_prob_micro_batch_size``: The batch size for one forward pass in the computation of ``ref_log_prob``. **Rollout Model** - ``actor_rollout_ref.rollout.name``: hf/vllm. We use vLLM by default because it's much efficient and our hybrid engine is implemented with vLLM. - Rollout (Auto-regressive) parameters. 
The key should be equal to the property name in vLLM's ``SamplingParams``. - ``temperature``, ``top_k``, ``top_p`` and others: Sampling parameters in ``SamplingParams``. - ``dtype``: Rollout model parameters type. This should be align with the actor model parameter type in FSDP/Megatron backend. - ``gpu_memory_utilization``: The proportion of the remaining GPU memory allocated for kv cache after other models have initialized when using vllm. - ``tensor_model_parallel_size``: TP size for rollout. Only effective for vllm. - ``log_prob_micro_batch_size``: Micro_batch_size (The batch size for one forward pass) for recalculating log_prob. - ``do_sample``: Whether to sample. If set to False, the rollout model will perform greedy sampling. We disable ``do_sample`` during validation. - ``actor_rollout_ref.rollout.ignore_eos``: Whether to ignore the EOS token and continue generating tokens after the EOS token is generated. - ``actor_rollout_ref.rollout.free_cache_engine``: Offload the KVCache after rollout generation stage. Default is True. When set to True, we need to disable the usage of CUDAGraph (set ``enforce_eager`` to True.) - ``actor_rollout_ref.rollout.enforce_eager``: Whether to use CUDAGraph in vLLM generation. Default set to True to disable CUDAGraph. - ``actor_rollout_ref.rollout.load_format``: Which weight loader to use to load the actor model weights to the rollout model. - ``auto``: Use Megatron weight loader. - ``megatron``: Use Megatron weight loader. Deployed with Megatron backend. The input model ``state_dict()`` is already partitioned along TP dimension and already gathered along PP dimension. This weight loader requires that the Rollout model and Actor model's parameters shape and name should be identical. - ``dtensor``: Default solution when using Huggingface weight loader. Deployed with FSDP backend and the state_dict_type is ``StateDictType.SHARDED_STATE_DICT``. Recommend to use this weight loader - ``hf``: Use Huggingface weight loader. Deployed with FSDP backend and the state_dict_type is ``StateDictType.FULL_STATE_DICT``. This solution doesn't need to rewrite the weight loader for each model implemented in vLLM but it results in larger peak memory usage. - ``dummy_hf``, ``dummy_megatron``, ``dummy_dtensor``: Random initialization. .. note:: **NOTED**: In this config field, users only need to select from ``dummy_megatron``, ``dummy_dtensor``, ``dummy_hf`` for rollout initialization and our hybrid engine will select the corresponding weight loader (i.e., ``megatron``, ``dtensor``, ``hf``) during actor/rollout weight synchronization. Critic Model ~~~~~~~~~~~~ Most parameters for Critic are similar to Actor Model. Reward Model ~~~~~~~~~~~~ .. code:: yaml reward_model: enable: False model: input_tokenizer: ${actor_rollout_ref.model.path} # set this to null if the chat template is identical path: ~/models/Anomy-RM-v0.1 external_lib: ${actor_rollout_ref.model.external_lib} fsdp_config: min_num_params: 0 param_offload: False micro_batch_size: 64 max_length: null - ``reward_model.enable``: Whether to enable reward model. If False, we compute the reward only with the user-defined reward functions. In GSM8K and Math examples, we disable reward model. For RLHF alignment example using full_hh_rlhf, we utilize reward model to assess the responses. If False, the following parameters are not effective. - ``reward_model.model`` - ``input_tokenizer``: Input tokenizer. 
  If the reward model's chat template is inconsistent with the policy, we need to first decode to plaintext, then apply the rm's chat_template. Then score with RM. If chat_templates are consistent, it can be set to null.

- ``path``: RM's HDFS path or local path. Note that RM only supports AutoModelForSequenceClassification. Other model types need to define their own RewardModelWorker and pass it from the code.

Algorithm
~~~~~~~~~

.. code:: yaml

   algorithm:
     gamma: 1.0
     lam: 1.0
     adv_estimator: gae
     kl_penalty: kl # how to estimate kl divergence
     kl_ctrl:
       type: fixed
       kl_coef: 0.005

- ``gamma``: Discount factor
- ``lam``: Trade-off between bias and variance in the GAE estimator
- ``adv_estimator``: gae. Currently only supports gae, will support GRPO in the future
- ``kl_penalty``: Support ``kl``, ``abs``, ``mse`` and ``full``. How to calculate the kl divergence between actor and reference policy. For specific options, refer to `core_algos.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/core_algos.py#L192>`_.

Trainer
~~~~~~~

.. code:: yaml

   trainer:
     total_epochs: 30
     project_name: verl_examples
     experiment_name: gsm8k
     logger: ['console', 'wandb']
     nnodes: 1
     n_gpus_per_node: 8
     save_freq: -1
     test_freq: 2
     critic_warmup: 0
     default_hdfs_dir: ~/experiments/gsm8k/ppo/${trainer.experiment_name} # hdfs checkpoint path
     default_local_dir: checkpoints/${trainer.project_name}/${trainer.experiment_name} # local checkpoint path

- ``trainer.total_epochs``: Number of epochs in training.
- ``trainer.project_name``: For wandb
- ``trainer.experiment_name``: For wandb
- ``trainer.logger``: Support console and wandb
- ``trainer.nnodes``: Number of nodes used in the training.
- ``trainer.n_gpus_per_node``: Number of GPUs per node.
- ``trainer.save_freq``: The frequency (by iteration) to save checkpoints of the actor and critic model.
- ``trainer.test_freq``: The validation frequency (by iteration).
- ``trainer.critic_warmup``: The number of iterations to train the critic model before actual policy learning.
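To make ``kl_coef`` concrete, the snippet below is a schematic of how a fixed KL controller typically folds the actor-vs-reference KL into the token-level rewards consumed by the GAE estimator. It illustrates the idea only and is not the exact verl implementation.

.. code:: python

   import torch

   def apply_fixed_kl_penalty(token_level_scores: torch.Tensor,
                              log_prob: torch.Tensor,
                              ref_log_prob: torch.Tensor,
                              kl_coef: float = 0.005) -> torch.Tensor:
       # 'kl' penalty style approximated per token as log_prob - ref_log_prob;
       # 'abs', 'mse' and 'full' are alternative estimators
       kld = log_prob - ref_log_prob
       # reward-model / rule-based score minus the KL penalty gives the
       # token-level rewards fed to the advantage estimator
       return token_level_scores - kl_coef * kld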
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/examples/config.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/examples/config.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 12464 }
GSM8K Example
=============

Introduction
------------

In this example, we train an LLM to tackle the GSM8k task.

Paper: https://arxiv.org/pdf/2110.14168

Dataset: https://huggingface.co/datasets/gsm8k

Note that the original paper mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RLHF agent using a rule-based reward model.

Dataset Introduction
--------------------

GSM8k is a math problem dataset. The prompt is an elementary school problem. The LLM model is required to answer the math problem.

The training set contains 7473 samples and the test set contains 1319 samples.

**An example**

Prompt
   Katy makes coffee using teaspoons of sugar and cups of water in the ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups of water, calculate the number of teaspoonfuls of sugar she used.

Solution
   The total ratio representing the ingredients she used to make the coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the number of teaspoons she used is 7/20, she used 7/20\ *120 = <<7/20*\ 120=42>>42 #### 42

Step 1: Prepare dataset
-----------------------

.. code:: bash

   cd examples/data_preprocess
   python3 gsm8k.py --local_dir ~/data/gsm8k

Step 2: Download Model
----------------------

There are three ways to prepare the model checkpoints for post-training:

- Download the required models from hugging face:

.. code:: bash

   huggingface-cli download deepseek-ai/deepseek-math-7b-instruct --local-dir ~/models/deepseek-math-7b-instruct --local-dir-use-symlinks False

- Use a model checkpoint that is already stored in your local directory or HDFS path.
- Alternatively, you can directly use the model name on huggingface (e.g., deepseek-ai/deepseek-math-7b-instruct) in the ``actor_rollout_ref.model.path`` and ``critic.model.path`` fields in the run script.

Note that users should prepare checkpoints for the actor, critic and reward model.

[Optional] Step 3: SFT your Model
---------------------------------

We provide an SFT Trainer using PyTorch FSDP in `fsdp_sft_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/fsdp_sft_trainer.py>`_. Users can customize their own SFT script using our FSDP SFT Trainer.

We also provide various training scripts for SFT on the GSM8K dataset in the `gsm8k sft directory <https://github.com/volcengine/verl/blob/main/examples/gsm8k/sft/>`_.

.. code:: shell

   set -x

   torchrun -m verl.trainer.fsdp_sft_trainer \
       data.train_files=$HOME/data/gsm8k/train.parquet \
       data.val_files=$HOME/data/gsm8k/test.parquet \
       data.prompt_key=question \
       data.response_key=answer \
       data.micro_batch_size=8 \
       model.partial_pretrain=deepseek-ai/deepseek-coder-6.7b-instruct \
       trainer.default_hdfs_dir=hdfs://user/verl/experiments/gsm8k/deepseek-coder-6.7b-instruct/ \
       trainer.project_name=gsm8k-sft \
       trainer.experiment_name=gsm8k-sft-deepseek-coder-6.7b-instruct \
       trainer.total_epochs=4 \
       trainer.logger=['console','wandb']

Step 4: Perform PPO training with your model on GSM8K Dataset
-------------------------------------------------------------

- Prepare your own run.sh script. Here's an example for the GSM8k dataset and the deepseek-llm-7b-chat model.
- Users could replace the ``data.train_files``, ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on their environment.
- See :doc:`config` for a detailed explanation of each config field.

**Reward Model/Function**

We use a rule-based reward model. We force the model to produce a final answer following 4 "#" as shown in the solution.
We extract the final answer from both the solution and model's output using regular expression matching. We compare them and assign a reward of 1 to correct answer, 0.1 to incorrect answer and 0 to no answer. **Training Script** The training script example for FSDP and Megatron-LM backend are stored in examples/ppo_trainer directory. .. code:: bash cd ../ppo_trainer bash run_deepseek7b_llm.sh The script of run_deepseek7b_llm.sh .. code:: bash set -x python3 -m verl.trainer.main_ppo \ data.train_files=~/data/rlhf/gsm8k/train.parquet \ data.val_files=~/data/rlhf/gsm8k/test.parquet \ data.train_batch_size=1024 \ data.val_batch_size=1312 \ data.max_prompt_length=512 \ data.max_response_length=512 \ actor_rollout_ref.model.path=~/models/deepseek-llm-7b-chat \ actor_rollout_ref.actor.optim.lr=1e-6 \ actor_rollout_ref.actor.ppo_mini_batch_size=256 \ actor_rollout_ref.actor.ppo_micro_batch_size=64 \ actor_rollout_ref.actor.fsdp_config.param_offload=False \ actor_rollout_ref.actor.fsdp_config.grad_offload=False \ actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \ actor_rollout_ref.rollout.micro_batch_size=256 \ actor_rollout_ref.rollout.log_prob_micro_batch_size=128 \ actor_rollout_ref.rollout.tensor_model_parallel_size=2 \ actor_rollout_ref.rollout.name=vllm \ actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \ actor_rollout_ref.ref.log_prob_micro_batch_size=128 \ actor_rollout_ref.ref.fsdp_config.param_offload=True \ critic.optim.lr=1e-5 \ critic.model.path=~/models/deepseek-llm-7b-chat \ critic.model.enable_gradient_checkpointing=False \ critic.ppo_micro_batch_size=64 \ critic.model.fsdp_config.param_offload=False \ critic.model.fsdp_config.grad_offload=False \ critic.model.fsdp_config.optimizer_offload=False \ algorithm.kl_ctrl.kl_coef=0.001 \ trainer.critic_warmup=0 \ trainer.logger=['console','wandb'] \ trainer.project_name='verl_example_gsm8k' \ trainer.experiment_name='deepseek_llm_7b_function_rm' \ trainer.n_gpus_per_node=8 \ trainer.nnodes=1 \ trainer.save_freq=-1 \ trainer.total_epochs=15
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/examples/gsm8k_example.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/examples/gsm8k_example.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 5986 }
PPO Example Architecture ======================== Let's start with the Proximal Policy Optimization algorithm, which is most widely used algorithm in LLM post-training. The main entry point of the PPO algorithm example is: `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py>`_. In this tutorial, we will go through the code architecture in `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py>`_. Define the data --------------- Users need to preprocess and store the dataset in parquet files. And we implement `RLHFDataset` to load and tokenize the parquet files. For ``RLHFDataset`` (Default), at least 1 fields are required: - ``prompt``: Contains the string prompt We already provide some examples of processing the datasets to parquet files in `data_preprocess directory <https://github.com/volcengine/verl/blob/main/examples/data_preprocess>`_. Currently, we support preprocess of GSM8k, MATH, Hellasage, Full_hh_rlhf datasets. See :doc:`../preparation/prepare_data` for more information. Define the reward functions for different datasets -------------------------------------------------- In this main entry point, the users only need to define their own reward function based on the datasets (or applications) utilized in PPO training. For example, we already provide reward functions for `GSM8k <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/gsm8k.py>`_ and `MATH <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/math.py>`_ datasets in the ``_select_rm_score_fn``. In the ``RewardManager``, we will compute the reward score based on the data_source to select corresponding reward functions. For some RLHF datasets (e.g., full_hh_rlhf), the reward model is utilized to assess the responses without any reward functions. In this case, the ``RewardManager`` will return the ``rm_score`` computed by the reward model directly. See `reward functions <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_ for detailed implementation. Define worker classes --------------------- .. code:: python if config.actor_rollout_ref.actor.strategy == 'fsdp': # for FSDP backend assert config.actor_rollout_ref.actor.strategy == config.critic.strategy from verl.workers.fsdp_workers import ActorRolloutRefWorker, CriticWorker from verl.single_controller.ray import RayWorkerGroup ray_worker_group_cls = RayWorkerGroup elif config.actor_rollout_ref.actor.strategy == 'megatron': # for Megatron backend assert config.actor_rollout_ref.actor.strategy == config.critic.strategy from verl.workers.megatron_workers import ActorRolloutRefWorker, CriticWorker from verl.single_controller.ray.megatron import NVMegatronRayWorkerGroup ray_worker_group_cls = NVMegatronRayWorkerGroup # Ray worker class for Megatron-LM else: raise NotImplementedError from verl.trainer.ppo.ray_trainer import ResourcePoolManager, Role role_worker_mapping = { Role.ActorRollout: ActorRolloutRefWorker, Role.Critic: CriticWorker, Role.RefPolicy: ActorRolloutRefWorker } global_pool_id = 'global_pool' resource_pool_spec = { global_pool_id: [config.trainer.n_gpus_per_node] * config.trainer.nnodes, } mapping = { Role.ActorRollout: global_pool_id, Role.Critic: global_pool_id, Role.RefPolicy: global_pool_id, } Step 1: Construct the mapping between roles and workers ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A role represents a group of workers in the same process. 
We have pre-defined several roles in `ray_trainer.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/ray_trainer.py#L38>`_. .. code:: python class Role(Enum): """ To create more roles dynamically, you can subclass Role and add new members """ Actor = 0 # This worker only has Actor Rollout = 1 # This worker only has Rollout ActorRollout = 2 # This worker has both actor and rollout, it's a HybridEngine Critic = 3 # This worker only has critic RefPolicy = 4 # This worker only has reference policy RewardModel = 5 # This worker only has reward model ActorRolloutRef = 6 # This worker contains actor, rollout and reference policy simultaneously Step 2: Define the worker class corresponding to this role ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - We have pre-implemented the ``ActorRolloutRefWorker``. Through different configs, it can be a standalone actor, a standalone rollout, an ActorRollout HybridEngine, or an ActorRolloutRef HybridEngine - We also pre-implemented workers for ``Actor``, ``Rollout``, ``Critic``, ``Reward Model`` and ``Reference model`` on two different backend: PyTorch FSDP and Megatron-LM. See `FSDP Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py>`_ and `Megatron-LM Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py>`_ for more information. Step 3: Define resource pool id and resource pool spec ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Resource pool is a division of global GPU resources, ``resource_pool_spec`` is a dict, mapping from id to # of GPUs - In the above example, we defined a global resource pool: global_pool_id, and then put all roles on this one resource pool with all the GPUs in this post-training task. This refers to *co-locate* placement where all the models share the same set of GPUs. - See resource pool and placement for advance usage. Defining reward model/function ------------------------------ .. code:: python # we should adopt a multi-source reward function here # - for rule-based rm, we directly call a reward score # - for model-based rm, we call a model # - for code related prompt, we send to a sandbox if there are test cases # - finally, we combine all the rewards together # - The reward type depends on the tag of the data if config.reward_model.enable: from verl.workers.fsdp_workers import RewardModelWorker role_worker_mapping[Role.RewardModel] = RewardModelWorker mapping[Role.RewardModel] = global_pool_id reward_fn = RewardManager(tokenizer=tokenizer, num_examine=0) # Note that we always use function-based RM for validation val_reward_fn = RewardManager(tokenizer=tokenizer, num_examine=1) resource_pool_manager = ResourcePoolManager(resource_pool_spec=resource_pool_spec, mapping=mapping) Since not all tasks use model-based RM, users need to define here whether it's a model-based RM or a function-based RM - If it's a model-based RM, directly add the ``RewardModel`` role in the resource mapping and add it to the resource pool mapping. - Note that the pre-defined ``RewardModelWorker`` only supports models with the structure of huggingface ``AutoModelForSequenceClassification``. If it's not this model, you need to define your own RewardModelWorker in `FSDP Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py>`_ and `Megatron-LM Workers <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py>`_. 
- If it's a function-based RM, the users are required to specify the reward function for each dataset (see the extension sketch at the end of this section).

.. code:: python

   def _select_rm_score_fn(data_source):
       if data_source == 'openai/gsm8k':
           return gsm8k.compute_score
       elif data_source == 'lighteval/MATH':
           return math.compute_score
       else:
           raise NotImplementedError

See reward functions implemented in the `reward_score directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/>`_ for more information.

Define, init and run the PPO Trainer
------------------------------------

.. code:: python

   trainer = RayPPOTrainer(config=config,
                           tokenizer=tokenizer,
                           role_worker_mapping=role_worker_mapping,
                           resource_pool_manager=resource_pool_manager,
                           ray_worker_group_cls=ray_worker_group_cls,
                           reward_fn=reward_fn,
                           val_reward_fn=val_reward_fn)
   trainer.init_workers()
   trainer.fit()

- We first initialize the ``RayPPOTrainer`` with user config, tokenizer and all the above worker mapping, resource pool, worker group and reward functions
- We then call ``trainer.init_workers()`` to initialize the models on the allocated GPUs (in the resource pool)
- The actual PPO training will be executed in ``trainer.fit()``

veRL can be easily extended to other RL algorithms by reusing the Ray model workers, resource pool and reward functions. See :doc:`extension<../advance/dpo_extension>` for more information.

Details of the ``RayPPOTrainer`` are discussed in :doc:`Ray Trainer<../workers/ray_trainer>`.
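As referenced above, supporting a new dataset with a function-based RM usually only requires one more branch in ``_select_rm_score_fn``. The dataset tag and scoring module below are hypothetical placeholders, not part of verl.

.. code:: python

   from verl.utils.reward_score import gsm8k, math
   # hypothetical: your new module, e.g. verl/utils/reward_score/my_dataset.py
   # from verl.utils.reward_score import my_dataset

   def _select_rm_score_fn(data_source):
       if data_source == 'openai/gsm8k':
           return gsm8k.compute_score
       elif data_source == 'lighteval/MATH':
           return math.compute_score
       elif data_source == 'my_org/my_dataset':  # hypothetical dataset tag
           return my_dataset.compute_score       # hypothetical scorer
       else:
           raise NotImplementedError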
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/examples/ppo_code_architecture.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/examples/ppo_code_architecture.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 9044 }
.. _algo-baseline-page:

Algorithm Baselines
===================

GSM8k
------------------

Assuming the GSM8k dataset is preprocessed via ``python3 examples/data_preprocess/gsm8k.py``

Refer to the table below to reproduce PPO training from different pre-trained models.

.. _Huggingface: https://huggingface.co/google/gemma-2-2b-it#benchmark-results
.. _SFT Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-sft-0.411.log
.. _SFT+PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/gemma-2-2b-it-ppo-bsz512_4-prompt1024-resp-512-0.640.log
.. _wandb: https://api.wandb.ai/links/verl-team/h7ux8602
.. _Qwen Blog: https://qwenlm.github.io/blog/qwen2.5-llm/
.. _PPO Command and logs: https://github.com/eric-haibin-lin/verl-data/blob/experiments/gsm8k/Qwen2.5-0.5B-bsz256_2-prompt1024-resp512-0.567.log

+----------------------------+-----------------------+------------+---------------------------------------+
| Model                      | Method                | Test score | Details                               |
+============================+=======================+============+=======================================+
| google/gemma-2-2b-it       | pretrained checkpoint | 23.9       | `Huggingface`_                        |
+----------------------------+-----------------------+------------+---------------------------------------+
| google/gemma-2-2b-it       | SFT                   | 52.06      | `SFT Command and logs`_               |
+----------------------------+-----------------------+------------+---------------------------------------+
| google/gemma-2-2b-it       | SFT + PPO             | 64.02      | `SFT+PPO Command and logs`_, `wandb`_ |
+----------------------------+-----------------------+------------+---------------------------------------+
| Qwen/Qwen2.5-0.5B-Instruct | pretrained checkpoint | 36.4       | `Qwen Blog`_                          |
+----------------------------+-----------------------+------------+---------------------------------------+
| Qwen/Qwen2.5-0.5B-Instruct | PPO                   | 56.7       | `PPO Command and logs`_               |
+----------------------------+-----------------------+------------+---------------------------------------+
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/experiment/ppo.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/experiment/ppo.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 3029 }
Prepare Data (Parquet) for Post-Training ======================================== Before starting the post-training job, we need to prepare the data for the policy training. The data should be stored in the parquet format. We provide several data preprocess scripts for different datasets, including GSM8K, MATH, HelloSwag, Full_hh_rlhf. To prepare other datasets, we need to follow the following steps: The data preprocess script can be divided into two parts: 1. The first part is the common part, which loads the dataset from huggingface's ``datasets`` package. Then preprocess the datasets with the ``make_map_fn`` and then store in the parquet format. .. code:: python import re import os import datasets from verl.utils.hdfs_io import copy, makedirs import argparse # To extract the solution for each prompts in the dataset # def extract_solution(solution_str): # ... if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument('--local_dir', default='/opt/tiger/gsm8k') parser.add_argument('--hdfs_dir', default=None) args = parser.parse_args() num_few_shot = 5 data_source = 'openai/gsm8k' dataset = datasets.load_dataset(data_source, 'main') train_dataset = dataset['train'] test_dataset = dataset['test'] # Construct a `def make_map_fn(split)` for the corresponding datasets. # ... train_dataset = train_dataset.map(function=make_map_fn('train'), with_indices=True) test_dataset = test_dataset.map(function=make_map_fn('test'), with_indices=True) local_dir = args.local_dir hdfs_dir = args.hdfs_dir train_dataset.to_parquet(os.path.join(local_dir, 'train.parquet')) test_dataset.to_parquet(os.path.join(local_dir, 'test.parquet')) makedirs(hdfs_dir) copy(src=local_dir, dst=hdfs_dir) 2. The users are required to implement the ``make_map_fn()`` function (as well as the ``extract_solution``) on their own to support different datasets or tasks. We already implemented the data preprocess of GSM8k, MATH, Hellaswag and Full_hh_rlhf datasets. And we take the GSM8k dataset as an example: **GSM8K** In the ``make_map_fn``, each data field should consist of the following 5 fields: 1. ``data_source``: The name of the dataset. To index the corresponding reward function in the ``RewardModule`` 2. ``prompt``: This field should be constructed in the format of huggingface chat_template. The tokenizer in ``RLHFDataset`` will apply chat template and tokenize the prompt. 3. ``ability``: Define the task category. 4. ``reward_model``: Currently, we only utilize the ``ground_truth`` field during evaluation. The ``ground_truth`` is computed by the ``extract_solution`` function. **NOTED** that the implementation of the corresponding reward function should align with this extracted ``ground_truth``. 5. ``extra_info``: Record some information of the current prompt. Not use for now. .. code:: python def extract_solution(solution_str): solution = re.search("#### (\\-?[0-9\\.\\,]+)", solution_str) # extract the solution after #### assert solution is not None final_solution = solution.group(0) final_solution = final_solution.split('#### ')[1].replace(',', '') return final_solution instruction_following = "Let's think step by step and output the final answer after \"####\"." 
# add a row to each data item that represents a unique id def make_map_fn(split): def process_fn(example, idx): question = example.pop('question') question = question + ' ' + instruction_following answer = example.pop('answer') solution = extract_solution(answer) data = { "data_source": data_source, "prompt": [{ "role": "user", "content": question }], "ability": "math", "reward_model": { "style": "rule", "ground_truth": solution }, "extra_info": { 'split': split, 'index': idx } } return data return process_fn
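After the script finishes, a quick sanity check on the generated parquet files can confirm that all five fields are present. The path below is just the ``--local_dir`` used in the GSM8k example and may differ in your setup.

.. code:: python

   import os
   import pandas as pd

   local_dir = os.path.expanduser('~/data/gsm8k')  # assumed --local_dir
   df = pd.read_parquet(os.path.join(local_dir, 'train.parquet'))

   row = df.iloc[0]
   print(row['data_source'])    # 'openai/gsm8k'
   print(row['prompt'])         # [{'role': 'user', 'content': '...'}]
   print(row['reward_model'])   # {'style': 'rule', 'ground_truth': '...'}
   print(row['ability'], row['extra_info'])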
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/preparation/prepare_data.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/preparation/prepare_data.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 4335 }
Implement Reward Function for Dataset
======================================

For each dataset, we need to implement a reward function or utilize a reward model to compute the rewards for the generated responses. We already pre-implemented some reward functions in the `reward_score directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_.

Currently, we support reward functions for GSM8k and MATH datasets. For RLHF datasets (e.g., full_hh_rlhf) and Code Generation (e.g., APPS), we utilize a reward model and a SandBox (to be open-sourced soon) for evaluation, respectively.

RewardManager
-------------

In the entrypoint of the PPO Post-Training script `main_ppo.py <https://github.com/volcengine/verl/blob/main/verl/trainer/main_ppo.py#L33>`_, we implement a ``RewardManager`` that utilizes pre-implemented reward functions to compute the scores for each response.

In the ``RewardManager``, we implemented a ``__call__`` function to compute the score for each response. All the reward functions are executed by ``compute_score_fn``. The input is a ``DataProto``, which includes:

- ``input_ids``, ``attention_mask``: ``input_ids`` and ``attention_mask`` after applying chat_template, including prompt and response
- ``responses``: response tokens
- ``ground_truth``: The ground truth string of the current prompt. Stored in ``non_tensor_batch`` in the ``DataProto``, which should be preprocessed in the parquet files.
- ``data_source``: The dataset name of the current prompt. Stored in ``non_tensor_batch`` in the ``DataProto``, which should be preprocessed in the parquet files.

After detokenizing the responses, the response strings and the ground truth strings will be input to the ``compute_score_fn`` to compute the score for each response.

Reward Functions
----------------

We already pre-implemented some reward functions in the `reward_score directory <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score>`_.

- In the `GSM8k example <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/gsm8k.py>`_, we force the response to output the final answer after four ####, then use string matching to compare with the ground truth. If completely correct, score 1 point; if the format is correct, score 0.1 points; if the format is incorrect, score 0 points.
- In the `MATH example <https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/math.py>`_, we follow the implementation in the `lm-evaluation-harness repository <https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/hendrycks_math/utils.py>`_.
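As a concrete illustration of the GSM8k rule above, a simplified scorer could look like the following. This is a sketch of the logic rather than a verbatim copy of ``gsm8k.py``.

.. code:: python

   import re

   def compute_score(solution_str: str, ground_truth: str,
                     format_score: float = 0.1, score: float = 1.0) -> float:
       # the final answer must appear after '#### ' (four '#' plus a space)
       match = re.search(r'#### (\-?[0-9\.\,]+)', solution_str)
       if match is None:
           return 0.0  # no parsable final answer
       answer = match.group(1).replace(',', '')
       return score if answer == ground_truth else format_score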
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/preparation/reward_function.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/preparation/reward_function.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 2605 }
Installation
============

To install veRL, we recommend using conda:

.. code:: bash

   conda create -n verl python==3.9
   conda activate verl

For installing the latest version of veRL, the best way is to clone and install it from source. Then you can modify our code to customize your own post-training jobs.

.. code:: bash

   # install verl together with some lightweight dependencies in setup.py
   git clone https://github.com/volcengine/verl.git
   cd verl
   pip3 install -e .

You can also install veRL using ``pip3 install``

.. code:: bash

   # directly install from pypi
   pip3 install verl

Dependencies
------------

veRL requires Python >= 3.9 and CUDA >= 12.1.

veRL supports various backends. We currently release FSDP and Megatron-LM for actor training and vLLM for rollout generation.

The following dependencies are required for both backends, PyTorch FSDP and Megatron-LM. The pros, cons and extension guide for using the PyTorch FSDP backend can be found in :doc:`FSDP Workers<../workers/fsdp_workers>`.

.. code:: bash

   # install torch [or you can skip this step and let vllm to install the correct version for you]
   pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu121

   # install vllm
   pip3 install vllm==0.6.3 # or you can install 0.5.4, 0.4.2 and 0.3.1
   pip3 install ray

   # flash attention 2
   pip3 install flash-attn --no-build-isolation

For users who pursue better scalability, we recommend using the Megatron-LM backend. Please install the above dependencies first.

Currently, we support Megatron-LM\@core_v0.4.0 and we have fixed some internal issues of Megatron-LM. Here's the additional installation guide. The pros, cons and extension guide for using the Megatron-LM backend can be found in :doc:`Megatron-LM Workers<../workers/megatron_workers>`.

.. code:: bash

   # FOR Megatron-LM Backend
   # apex
   pip3 install -v --disable-pip-version-check --no-cache-dir --no-build-isolation \
       --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" \
       git+https://github.com/NVIDIA/apex

   # transformer engine
   pip3 install git+https://github.com/NVIDIA/TransformerEngine.git@stable

   # megatron core v0.4.0
   cd ..
   git clone -b core_v0.4.0 https://github.com/NVIDIA/Megatron-LM.git
   cd Megatron-LM
   cp ../verl/patches/megatron_v4.patch .
   git apply megatron_v4.patch
   pip3 install -e .
   export PYTHONPATH=$PYTHONPATH:$(pwd)
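After installation, a quick sanity check that the main dependencies import correctly (the versions printed will depend on what you installed):

.. code:: bash

   python3 -c "import verl, ray, vllm, torch; print(torch.__version__, vllm.__version__, ray.__version__)"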
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/start/install.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/start/install.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 2498 }
.. _quickstart:

=========================================================
Quickstart: Post-train a LLM using PPO with GSM8K dataset
=========================================================

Post-train a LLM using GSM8K dataset
===================================================================

Introduction
------------

.. _hf_dataset_gsm8k: https://huggingface.co/datasets/gsm8k

In this example, we train an LLM to tackle the `GSM8k <hf_dataset_gsm8k>`_ task with function-based rewards. [1]_

Prerequisite:

- the latest version of ``verl`` and its dependencies installed following the installation guide. Using the docker image is recommended.

- a GPU with at least 24 GB HBM

Dataset Introduction
--------------------

GSM8k is a math problem dataset. The prompt is an elementary school problem. The LLM model is asked to solve the math problem.

Below is an example:

Prompt
   Katy makes coffee using teaspoons of sugar and cups of water in the ratio of 7:13. If she used a total of 120 teaspoons of sugar and cups of water, calculate the number of teaspoonfuls of sugar she used.

Solution
   The total ratio representing the ingredients she used to make the coffee is 7+13 = <<7+13=20>>20 Since the fraction representing the number of teaspoons she used is 7/20, she used 7/20\ *120 = <<7/20*\ 120=42>>42 #### 42

Step 1: Prepare the dataset
----------------------------

We preprocess the dataset in parquet format so that (1) it contains necessary fields for computing RL rewards and (2) is faster to read.

.. code-block:: bash

   python3 examples/data_preprocess/gsm8k.py --local_dir ~/data/gsm8k

Step 2: Download a model for post-training
-------------------------------------------

Usually we recommend starting with an "instruct" model variant so that the model follows instructions. In this example, we start with the ``Qwen2.5-0.5B-Instruct`` model.

If you start from a "base" model variant, doing SFT before RL is recommended. Refer to the `sft directory <https://github.com/volcengine/verl/blob/main/examples/gsm8k/sft/>`_ and `SFT Trainer <https://github.com/volcengine/verl/blob/main/verl/trainer/fsdp_sft_trainer.py>`_ for further details.

.. code-block:: bash

   python3 -c "import transformers; transformers.pipeline('text-generation', model='Qwen/Qwen2.5-0.5B-Instruct')"

Step 3: Perform PPO training with the instruct model
----------------------------------------------------------------------

**Reward Model/Function**

We use a pre-defined rule-based reward model. We force the model to produce a final answer following 4 "#" as shown in the solution. We extract the final answer from both the solution and the model's output using regular expression matching. We assign a reward of 1 to a correct answer, 0.1 to an incorrect answer and 0 to no answer.

For more details, please refer to `verl/utils/reward_score/gsm8k.py <https://github.com/volcengine/verl/blob/v0.1/verl/utils/reward_score/gsm8k.py>`_.

**Training Script**

Now let's run PPO training with the dataset and model above. [2]_

Set the ``data.train_files``, ``data.val_files``, ``actor_rollout_ref.model.path`` and ``critic.model.path`` based on your dataset and model names or paths.

..
code-block:: bash PYTHONUNBUFFERED=1 python3 -m verl.trainer.main_ppo \ data.train_files=$HOME/data/gsm8k/train.parquet \ data.val_files=$HOME/data/gsm8k/test.parquet \ data.train_batch_size=256 \ data.val_batch_size=1312 \ data.max_prompt_length=512 \ data.max_response_length=256 \ actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \ actor_rollout_ref.actor.optim.lr=1e-6 \ actor_rollout_ref.actor.ppo_mini_batch_size=64 \ actor_rollout_ref.actor.ppo_micro_batch_size=4 \ actor_rollout_ref.rollout.log_prob_micro_batch_size=8 \ actor_rollout_ref.rollout.tensor_model_parallel_size=1 \ actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \ actor_rollout_ref.ref.log_prob_micro_batch_size=4 \ critic.optim.lr=1e-5 \ critic.model.path=Qwen/Qwen2.5-0.5B-Instruct \ critic.ppo_micro_batch_size=4 \ algorithm.kl_ctrl.kl_coef=0.001 \ trainer.logger=['console'] \ +trainer.val_before_train=False \ trainer.default_hdfs_dir=null \ trainer.n_gpus_per_node=1 \ trainer.nnodes=1 \ trainer.save_freq=10 \ trainer.test_freq=10 \ trainer.total_epochs=15 2>&1 | tee verl_demo.log You are expected to see the following logs, indicating training in progress. The key metric ``val/test_score/openai/gsm8k`` is computed every ``trainer.test_freq`` steps: .. code-block:: bash step:0 - timing/gen:21.470 - timing/ref:4.360 - timing/values:5.800 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.109 - timing/update_critic:15.664 - critic/vf_loss:14.947 - critic/vf_clipfrac:0.000 - critic/vpred_mean:-2.056 - critic/grad_norm:1023.278 - critic/lr(1e-4):0.100 - timing/update_actor:20.314 - actor/entropy_loss:0.433 - actor/pg_loss:-0.005 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:1.992 - actor/lr(1e-4):0.010 - critic/score/mean:0.004 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.004 - critic/rewards/max:1.000 - critic/rewards/min:0.000 - critic/advantages/mean:-0.000 - critic/advantages/max:2.360 - critic/advantages/min:-2.280 - critic/returns/mean:0.003 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.045 - critic/values/max:9.500 - critic/values/min:-14.000 - response_length/mean:239.133 - response_length/max:256.000 - response_length/min:77.000 - prompt_length/mean:104.883 - prompt_length/max:175.000 - prompt_length/min:68.000 step:1 - timing/gen:23.020 - timing/ref:4.322 - timing/values:5.953 - critic/kl:0.000 - critic/kl_coeff:0.001 - timing/adv:0.118 - timing/update_critic:15.646 - critic/vf_loss:18.472 - critic/vf_clipfrac:0.384 - critic/vpred_mean:1.038 - critic/grad_norm:942.924 - critic/lr(1e-4):0.100 - timing/update_actor:20.526 - actor/entropy_loss:0.440 - actor/pg_loss:0.000 - actor/pg_clipfrac:0.002 - actor/ppo_kl:0.000 - actor/grad_norm:2.060 - actor/lr(1e-4):0.010 - critic/score/mean:0.000 - critic/score/max:0.000 - critic/score/min:0.000 - critic/rewards/mean:0.000 - critic/rewards/max:0.000 - critic/rewards/min:0.000 - critic/advantages/mean:0.000 - critic/advantages/max:2.702 - critic/advantages/min:-2.616 - critic/returns/mean:0.000 - critic/returns/max:0.000 - critic/returns/min:0.000 - critic/values/mean:-2.280 - critic/values/max:11.000 - critic/values/min:-16.000 - response_length/mean:232.242 - response_length/max:256.000 - response_length/min:91.000 - prompt_length/mean:102.398 - prompt_length/max:185.000 - prompt_length/min:70.000 Checkout :ref:`algo-baseline-page` for full training and validation logs for reference. 
The checkpoint is saved at the following dir by default: ``checkpoints/${trainer.project_name}/${trainer.experiment_name}``

To enable ``wandb`` for experiment tracking, set the following configs:

.. code-block:: bash

   trainer.logger=['console','wandb'] \
   trainer.project_name=$YOUR_PROJECT_NAME \
   trainer.experiment_name=$YOUR_RUN_NAME \

If you encounter out of memory issues with HBM less than 32GB, enabling the following configs would help:

.. code-block:: bash

   actor_rollout_ref.actor.ppo_micro_batch_size=1 \
   critic.ppo_micro_batch_size=1 \
   actor_rollout_ref.actor.fsdp_config.optimizer_offload=True \
   critic.model.fsdp_config.optimizer_offload=True \

For the full set of configs, please refer to :ref:`config-explain-page` for a detailed explanation and performance tuning.

.. [1] The original paper (https://arxiv.org/pdf/2110.14168) mainly focuses on training a verifier (a reward model) to solve math problems via Best-of-N sampling. In this example, we train an RL agent using a rule-based reward model.
.. [2] More training script examples for the FSDP and Megatron-LM backends are stored in the `examples/ppo_trainer <https://github.com/volcengine/verl/tree/main/examples/ppo_trainer>`_ directory.
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/start/quickstart.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/start/quickstart.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 8018 }
PyTorch FSDP Backend
======================

We support PyTorch FSDP Backend by implementing various workers for actor, critic, reference, rollout and reward models. We also implement the ``FSDPVLLMShardingManager`` that reshards weights between FSDP and vLLM in `fsdp_vllm.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/hybrid_engine/fsdp_vllm.py>`_.

**Pros**

- Readily support various models.
- Users only need to implement the corresponding ``dtensor_weight_loader`` for weight synchronization between FSDP and vLLM. While for ``hf_weight_loader``, users can directly apply any models supported both in HF and vLLM without any code change.
- Easy to organize the forward and backward computation for each model.

**Cons**

- Poor scalability when it comes to large-scale models (e.g. Llama 70B and 405B)
- The resharding overhead between actor and rollout could be larger than with the Megatron-LM backend.

Due to its simplicity, we recommend using the FSDP backend for algorithm research and prototyping.

FSDP Workers
--------------

ActorRolloutRefWorker
^^^^^^^^^^^^^^^^^^^^^

Actor/Rollout HybridEngine
''''''''''''''''''''''''''

1. HybridEngine, Actor and Rollout initialization API.

.. code:: python

   @register(dispatch_mode=Dispatch.ONE_TO_ALL)
   def init_model(self):

``ONE_TO_ALL``: when calling the ``init_model`` function from the driver process, each worker (on a GPU) will execute the following model initialization process.

The initialization details of HybridEngine, Actor and Rollout are highlighted below:

1. ``DataParallelPPOActor`` implements the simple PPO computation logics when the model is built with FSDP, including compute log prob, model update.
2. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM Engine and make it executed under SPMD to fit into our ``WorkerGroup`` design.
3. ``FSDPVLLMShardingManager`` is a context manager to perform actual resharding between actor and rollout.

See the `source code <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/fsdp_workers.py#L42>`_ for more information.

2. Generate sequence and recompute log prob

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def generate_sequences(self, prompts: DataProto):

- ``Dispatch.DP_COMPUTE_PROTO``: The data will be dispatched and collected along the DP dimension
- In this function, the rollout model will perform auto-regressive generation and the actor model will recompute the old log prob for the generated response.

3. Update actor model

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def update_actor(self, data: DataProto):

- Update the actor model weight using PPO & entropy loss.

ReferenceModel
''''''''''''''

1. Reference model initialization

The reference model is initialized using the same function as the actor model without initializing the HybridEngine and Optimizer. Then the actor model is also wrapped by the ``DataParallelPPOActor``.

2. Compute reference log prob

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def compute_ref_log_prob(self, data: DataProto):

- In this function, the reference model will call the compute log prob function in ``DataParallelPPOActor`` to compute the reference log prob.

CriticWorker and RewardWorker
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Model initialization

Quite similar to the reference model. The CriticWorker will perform additional initialization for the Optimizer.

2. Compute Values for CriticWorker

..
code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def compute_values(self, data: DataProto):

3. Update Critic

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def update_critic(self, data: DataProto):

4. Compute Reward

.. code:: python

   @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO)
   def compute_rm_score(self, data: DataProto):

HybridShard
------------

We do not support FSDP `HybridShard` yet. To support this, we may need to construct a 2D device mesh and test the corresponding ``dtensor_weight_loader`` and ``hf_weight_loader`` for each model.
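For reference, the kind of 2D device mesh HybridShard would need can be constructed with PyTorch's ``init_device_mesh``. The group sizes below are illustrative, and this is not yet wired into veRL.

.. code:: python

   from torch.distributed.device_mesh import init_device_mesh

   # e.g. 16 GPUs arranged as 2 replica groups x 8 shards (illustrative sizes)
   mesh_2d = init_device_mesh("cuda", (2, 8), mesh_dim_names=("replicate", "shard"))
   # FSDP(model, device_mesh=mesh_2d, sharding_strategy=ShardingStrategy.HYBRID_SHARD, ...)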
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/workers/fsdp_workers.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/workers/fsdp_workers.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 4166 }
Megatron-LM Backend
=====================

We support Megatron Backend by implementing various workers for actor, critic, reference, rollout and reward models. We also implement the ``3DHybridEngine`` using Megatron-LM and vLLM in `megatron_vllm.py <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/hybrid_engine/megatron_vllm.py>`_.

**Pros**

- Support 3D parallelism and sequence parallelism for the best scalability and throughput.
- 3D HybridEngine can significantly reduce peak memory usage and reduce the weight synchronization overhead between actor and rollout.

**Cons**

- Users should implement their own models for Megatron-LM
- Users should implement the corresponding weight_loader to

  - synchronize the model weight between actor (in Megatron) and rollout (in vLLM).
  - load weights from checkpoints to the corresponding model in Megatron-LM

Megatron Workers
----------------

MegatronWorker
^^^^^^^^^^^^^^

``MegatronWorker`` is the base class of the different megatron worker classes. In this class, the ``get_megatron_global_info`` and ``get_megatron_rank_info`` functions retrieve the 3D parallel world size and rank of each ``Worker`` running on a specific GPU. This information will be used in the transfer protocol for the Megatron Backend.

The following ``Worker`` classes for the different models will be utilized to construct the ``WorkerGroup``.

We implement various APIs for each ``Worker`` class, decorated with ``@register(dispatch_mode=...)``. These APIs can be called by the ray driver process. The data can be correctly collected and dispatched following the ``dispatch_mode`` of each function. The supported dispatch modes (i.e., transfer protocols) can be found in `decorator.py <https://github.com/volcengine/verl/blob/main/verl/single_controller/base/decorator.py>`_.

ActorRolloutRefWorker
^^^^^^^^^^^^^^^^^^^^^

This class is implemented for the Actor/Rollout HybridEngine or for the reference model to initialize their model and perform computation.

Actor/Rollout HybridEngine
''''''''''''''''''''''''''

1. HybridEngine, Actor and Rollout initialization API.

.. code:: python

   @register(dispatch_mode=Dispatch.ONE_TO_ALL)
   def init_model(self):

``ONE_TO_ALL``: when calling the ``init_model`` function from the driver process, each worker (on a GPU) will execute the following model initialization process.

The initialization details of HybridEngine, Actor and Rollout are highlighted below:

1. ``AllGatherPPModel`` holds the memory buffer for both Actor and Rollout and supports weight resharding between actor and rollout.
2. ``MegatronPPOActor`` implements the simple PPO computation logics when the model is built with Megatron, including compute log prob, model update.
3. ``vLLMRollout`` supports generation with vLLM. We modify the vLLM Engine and make it executed under SPMD to fit into our ``WorkerGroup`` design.
4. ``MegatronVLLMShardingManager`` is a context manager to perform actual resharding between actor and rollout.

See `source code <https://github.com/volcengine/verl/blob/main/verl/trainer/ppo/workers/megatron_workers.py#L63>`_ for more information.

.. code:: python

   # Initialize the 3D HybridEngine
   hybrid_engine = AllGatherPPModel(model_provider=megatron_actor_model_provider)
   # Fetch the model at current rank
   actor_module = hybrid_engine.this_rank_models
   ...
# build actor model self.actor = MegatronPPOActor(config=self.config.actor, model_config=self.actor_model_config, megatron_config=megatron_config, actor_module=self.actor_module, actor_optimizer=self.actor_optimizer, actor_optimizer_config=self.actor_optim_config) # build rollout # rollout initialization rollout = vLLMRollout(actor_module=params, config=self.config.rollout, tokenizer=self.tokenizer, model_hf_config=self.actor_model_config, train_tp=mpu.get_tensor_model_parallel_world_size()) # perform weight resharding between actor and rollout sharding_manager = MegatronVLLMShardingManager(module=self.hybrid_engine, inference_engine=rollout.inference_engine, model_config=self.actor_model_config, layer_name_mapping=layer_name_mapping) ... 2. Generate sequence and recompute log prob .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_PP_AS_DP_PROTO) def generate_sequences(self, prompts: DataProto): - ``Dispatch.MEGATRON_PP_AS_DP_PROTO``: The PP dimension of the actor model will be regarded as the DP dimension. The driver process will then dispatch and collect the data according to this reorganization. This is because, in the HybridEngine, the actor weight, which usually uses larger 3D parallel sizes, will be gathered along the PP dimension and TP dimension. Therefore, the corresponding data should be dispatched and collected through the 3D parallel group of the rollout model, rather than the actor model. However, the world_size and rank information can only be retrieved from ``get_megatron_global_info`` and ``get_megatron_rank_info``, which record the 3D information for the actor model. Moreover, the data resharding inside the TP dimension will be processed within the HybridEngine. - In this function, the rollout model will perform auto-regressive generation and the actor model will recompute the old log prob for the generated response. 3. Update actor model .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def update_actor(self, data: DataProto): - ``Dispatch.MEGATRON_COMPUTE_PROTO``: Users pass the data partitioned by the DP dimension. The data is dispatched to all tp/pp ranks within the same dp group, and ultimately only the output data from tp=0 and the last pp is collected. - Update the actor model weight using the PPO & entropy loss. ReferenceModel '''''''''''''' 1. Reference model initialization The reference model is initialized using the same function as the actor model, without initializing the HybridEngine and Optimizer. The reference model is then also wrapped by the ``MegatronPPOActor``. 2. Compute reference log prob .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def compute_ref_log_prob(self, data: DataProto): - In this function, the reference model will call the compute log prob function in ``MegatronPPOActor`` to compute the reference log prob. CriticWorker and RewardWorker ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 1. Model initialization Quite similar to the reference model. The CriticWorker will perform additional initialization for the Optimizer. 2. Compute Values for CriticWorker .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def compute_values(self, data: DataProto): 3. Update Critic .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def update_critic(self, data: DataProto): 4. Compute Reward .. code:: python @register(dispatch_mode=Dispatch.MEGATRON_COMPUTE_PROTO) def compute_rm_score(self, data: DataProto): Context Parallel ---------------- This requires the developer/contributor to implement context parallelism both in Megatron-LM and in the models.
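To make the ``@register(dispatch_mode=...)`` mechanism described above more concrete, the following is a toy, self-contained sketch. It is not verl's actual implementation (the real decorator lives in `decorator.py`); it only illustrates how such a decorator can attach a transfer protocol to a worker method so that the driver process knows how to dispatch inputs and collect outputs.

.. code:: python

    # Toy illustration only; not verl's real register/Dispatch implementation.
    from enum import Enum, auto
    from functools import wraps

    class Dispatch(Enum):
        ONE_TO_ALL = auto()
        MEGATRON_COMPUTE_PROTO = auto()
        MEGATRON_PP_AS_DP_PROTO = auto()

    def register(dispatch_mode: Dispatch):
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                return func(*args, **kwargs)
            # metadata the driver reads to choose how to split inputs / collect outputs
            wrapper.dispatch_mode = dispatch_mode
            return wrapper
        return decorator

    class ToyWorker:
        @register(dispatch_mode=Dispatch.ONE_TO_ALL)
        def init_model(self):
            return "initialized"

    print(ToyWorker.init_model.dispatch_mode)  # Dispatch.ONE_TO_ALL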
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/workers/megatron_workers.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/workers/megatron_workers.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 7477 }
PPO Ray Trainer =============== We implement the RayPPOTrainer, which is a trainer that runs in the driver process on a single CPU/GPU node (CPU by default). The RayPPOTrainer includes 3 core functions for data preparation, WorkerGroup initialization and the PPO training loop. Data Preparation ---------------- The ``RayPPOTrainer``, as a single process, is responsible for loading a complete batch of samples (prompts) from the dataset and then dispatching them to different worker_groups running on different GPUs. To generalize the data loading, we implement the ``RLHFDataset`` class to load the preprocessed parquet files, apply chat templates to the prompts, add padding, truncate prompts that exceed the max prompt length, and then tokenize. .. code:: python self.train_dataset = RLHFDataset(parquet_files=self.config.data.train_files, tokenizer=self.tokenizer, prompt_key=self.config.data.prompt_key, max_prompt_length=self.config.data.max_prompt_length, filter_prompts=True, return_raw_chat=self.config.data.get('return_raw_chat', False), truncation='error') Then, the dataloader will iterate over the dataset with the PPO mini-batch size. WorkerGroup Initialization -------------------------- We first introduce a basic implementation of initializing the ``WorkerGroup`` of the actor model on a given set of GPUs. .. code:: python # max_colocate_count means the number of WorkerGroups (i.e. processes) in each RayResourcePool # For FSDP backend, we recommend using max_colocate_count=1, which merges all WorkerGroups into one. # For Megatron backend, we recommend using max_colocate_count>1, which can utilize different WorkerGroups for different models resource_pool = RayResourcePool(process_on_nodes=[config.trainer.n_gpus_per_node] * config.trainer.nnodes, use_gpu=True, max_colocate_count=1) # define actor rollout cls to be init on remote actor_rollout_cls = RayClassWithInitArgs(cls=ActorRolloutWorker) # define actor_rollout worker group actor_rollout_worker_group = MegatronRayWorkerGroup(resource_pool=resource_pool, ray_cls_with_init=actor_rollout_cls, default_megatron_kwargs=config.actor_rollout.megatron) Different WorkerGroups, like ``actor_rollout_worker_group``, ``critic_worker_group`` and ``ref_worker_group``, live in separate processes in the above implementation. The driver process can then call the distributed compute functions within the ``actor_rollout_worker_group`` and other roles to construct the RL training loop. For models colocated on the same set of GPUs, we further provide a fine-grained optimization, which merges the ``worker_group`` of different roles into the same process. This optimization can save the redundant CUDA/distributed context in different processes. .. code:: python # initialize WorkerGroup # NOTE: if you want to use a different resource pool for each role, which can support different parallel sizes, # you should not use `create_colocated_worker_cls`. Instead, directly pass different resource pools to different worker groups. # See TODO(url) for more information. 
all_wg = {} for resource_pool, class_dict in self.resource_pool_to_cls.items(): worker_dict_cls = create_colocated_worker_cls(class_dict=class_dict) wg_dict = self.ray_worker_group_cls(resource_pool=resource_pool, ray_cls_with_init=worker_dict_cls) spawn_wg = wg_dict.spawn(prefix_set=class_dict.keys()) all_wg.update(spawn_wg) if self.use_critic: self.critic_wg = all_wg['critic'] self.critic_wg.init_model() if self.use_reference_policy: self.ref_policy_wg = all_wg['ref'] self.ref_policy_wg.init_model() if self.use_rm: self.rm_wg = all_wg['rm'] self.rm_wg.init_model() # we should create rollout at the end so that vllm can have a better estimation of kv cache memory self.actor_rollout_wg = all_wg['actor_rollout'] self.actor_rollout_wg.init_model() .. note:: For megatron backend, if we merge the ``worker_groups`` into the same processes, all the roles will utilize the same 3D parallel size. To optimize this, we may need to maintain several 3D process groups for each role in the same distributed context. If you want to use different 3D parallel size for different roles, please follow the similar architecture of the first code block to initialize each role's ``worker_group`` PPO Training Loop ----------------- We implement the PPO training loop by calling the functions in worker_group of each role. The input and output data of each function is a ``DataProto`` object implemented in `protocol.py <https://github.com/volcengine/verl/blob/main/verl/protocol.py>`_. In the training loop, trainer will dispatch/collect the data to/from different GPUs following the transfer protocols wrapped in the workers' functions. The computation of PPO micro batches is processed in ``update_actor`` and ``update_critic`` functions. To extend to other RLHF algorithms, such as DPO, GRPO, please refer to :doc:`../advance/dpo_extension`. .. code:: python def fit(self): """ The training loop of PPO. The driver process only need to call the compute functions of the worker group through RPC to construct the PPO dataflow. The light-weight advantage computation is done on the driver process. """ from verl.utils.tracking import Tracking from omegaconf import OmegaConf logger = Tracking(project_name=self.config.trainer.project_name, experiment_name=self.config.trainer.experiment_name, default_backend=self.config.trainer.logger, config=OmegaConf.to_container(self.config, resolve=True)) global_steps = 0 # perform validation before training # currently, we only support validation using the reward_function. 
if self.val_reward_fn is not None: val_metrics = self._validate() pprint(f'Initial validation metrics: {val_metrics}') for epoch in range(self.config.trainer.total_epochs): for batch_dict in self.train_dataloader: metrics = {} batch: DataProto = DataProto.from_single_dict(batch_dict) # batch = batch.to('cuda') # pop those keys for generation gen_batch = batch.pop(batch_keys=['input_ids', 'attention_mask', 'position_ids']) # generate a batch with Timer(name='gen', logger=None) as timer: gen_batch_output = self.actor_rollout_wg.generate_sequences(gen_batch) metrics['timing/gen'] = timer.last batch = batch.union(gen_batch_output) if self.use_reference_policy: # compute reference log_prob with Timer(name='ref', logger=None) as timer: ref_log_prob = self.ref_policy_wg.compute_ref_log_prob(batch) batch = batch.union(ref_log_prob) metrics['timing/ref'] = timer.last # compute values with Timer(name='values', logger=None) as timer: values = self.critic_wg.compute_values(batch) batch = batch.union(values) metrics['timing/values'] = timer.last with Timer(name='adv', logger=None) as timer: # compute scores. Support both model and function-based. # We first compute the scores using reward model. Then, we call reward_fn to combine # the results from reward model and rule-based results. if self.use_rm: # we first compute reward model score reward_tensor = self.rm_wg.compute_rm_score(batch) batch = batch.union(reward_tensor) # we combine with rule-based rm reward_tensor = self.reward_fn(batch) batch.batch['token_level_scores'] = reward_tensor # compute rewards. apply_kl_penalty if available batch, kl_metrics = apply_kl_penalty(batch, kl_ctrl=self.kl_ctrl, kl_penalty=self.config.algorithm.kl_penalty) metrics.update(kl_metrics) # compute advantages, executed on the driver process batch = compute_advantage(batch, self.config.algorithm.gamma, self.config.algorithm.lam, adv_estimator=self.config.algorithm.adv_estimator) metrics['timing/adv'] = timer.last # update critic if self.use_critic: with Timer(name='update_critic', logger=None) as timer: critic_output = self.critic_wg.update_critic(batch) metrics['timing/update_critic'] = timer.last critic_output_metrics = reduce_metrics(critic_output.meta_info['metrics']) metrics.update(critic_output_metrics) # implement critic warmup if self.config.trainer.critic_warmup <= global_steps: # update actor with Timer(name='update_actor', logger=None) as timer: actor_output = self.actor_rollout_wg.update_actor(batch) metrics['timing/update_actor'] = timer.last actor_output_metrics = reduce_metrics(actor_output.meta_info['metrics']) metrics.update(actor_output_metrics) # validate if self.val_reward_fn is not None and (global_steps + 1) % self.config.trainer.test_freq == 0: with Timer(name='testing', logger=None) as timer: val_metrics: dict = self._validate() val_metrics = {f'val/{key}': val for key, val in val_metrics.items()} metrics['timing/testing'] = timer.last metrics.update(val_metrics) # collect metrics data_metrics = compute_data_metrics(batch=batch) metrics.update(data_metrics) # TODO: make a canonical logger that supports various backend logger.log(data=metrics, step=global_steps) if self.config.trainer.save_freq > 0 and (global_steps + 1) % self.config.trainer.save_freq == 0: actor_local_path = os.path.join(self.config.trainer.default_local_dir, 'actor', f'global_step_{global_steps}') actor_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'actor') self.actor_rollout_wg.save_checkpoint(actor_local_path, actor_remote_path) if self.use_critic: 
critic_local_path = os.path.join(self.config.trainer.default_local_dir, 'critic', f'global_step_{global_steps}') critic_remote_path = os.path.join(self.config.trainer.default_hdfs_dir, 'critic') self.critic_wg.save_checkpoint(critic_local_path, critic_remote_path) global_steps += 1 # perform validation after training if self.val_reward_fn is not None: val_metrics = self._validate() pprint(f'Final validation metrics: {val_metrics}')
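The advantage computation referenced above (``compute_advantage`` with ``gamma`` and ``lam``) is lightweight enough to run on the driver process. The following is a purely illustrative, self-contained sketch of Generalized Advantage Estimation on a single response of length T; it is not verl's ``compute_advantage`` (which operates on batched, masked ``DataProto`` tensors), and the reward/value numbers are made up.

.. code:: python

    # Minimal GAE sketch: A_t = delta_t + gamma * lam * A_{t+1}, delta_t = r_t + gamma * V_{t+1} - V_t
    import torch

    def gae(rewards: torch.Tensor, values: torch.Tensor, gamma: float, lam: float):
        """rewards, values: shape (T,). Returns (advantages, returns), each of shape (T,)."""
        T = rewards.shape[0]
        advantages = torch.zeros(T)
        last_adv = 0.0
        for t in reversed(range(T)):
            next_value = values[t + 1] if t + 1 < T else 0.0
            delta = rewards[t] + gamma * next_value - values[t]
            last_adv = delta + gamma * lam * last_adv
            advantages[t] = last_adv
        return advantages, advantages + values

    rewards = torch.tensor([0.0, 0.0, 1.0])   # e.g. a rule-based reward placed on the final token
    values = torch.tensor([0.2, 0.4, 0.7])    # critic values for each token
    print(gae(rewards, values, gamma=1.0, lam=0.95))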
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/docs/workers/ray_trainer.rst", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/docs/workers/ray_trainer.rst", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 12036 }
# Split Placement Example Here we introduce how to run the naive implementation of the split placement of the PPO algorithm. We will release the complete version of flexible placement in the near future. For a quickstart, you can just follow Step 2 to modify the code and then follow Step 4 to execute the split placement example. ### Step 1: Placing the models to different GPUs Specify the placement and resource allocation. In the example, we place the actor and reference model on the first half of the GPUs while mapping the critic and reward model (if any) to the second half of the GPUs. ```python actor_rollout_ref_pool_id = 'actor_rollout_ref_pool' critic_pool_id = 'critic_pool' if config.trainer.nnodes // 2 == 0 and config.trainer.n_gpus_per_node // 2 > 0: resource_pool_spec = { actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes, critic_pool_id: [config.trainer.n_gpus_per_node // 2] * config.trainer.nnodes, } else: resource_pool_spec = { actor_rollout_ref_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2), critic_pool_id: [config.trainer.n_gpus_per_node] * (config.trainer.nnodes // 2), } print(f'resource_pool_spec: {resource_pool_spec}') mapping = { Role.ActorRollout: actor_rollout_ref_pool_id, Role.Critic: critic_pool_id, Role.RefPolicy: actor_rollout_ref_pool_id, } mapping[Role.RewardModel] = critic_pool_id ``` ### Step 2: Make the models execute asynchronously Based on the model placement, we need to make the models execute asynchronously. To do so, you need to turn off the `blocking` flag (i.e., `blocking=False`) in our decorator of some model operations. For example, if we want the actor update and critic update to be executed in parallel, we need to make the following modification in `fsdp_workers.py`: ```python @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False) def update_actor(self, data: DataProto): ... @register(dispatch_mode=Dispatch.DP_COMPUTE_PROTO, blocking=False) def update_critic(self, data: DataProto): ... ``` We can also parallelize the computation of `ref_log_prob`, `values` and `rewards` in the split placement. For simplicity of the tutorial, we only parallelize the actor and critic updates in this example. ### Step 3: Execute these operations in parallel in the single controller process To implement the parallel execution of the actor and critic updates, the only thing we need to modify in `ray_trainer.py` is to `get` the concurrent `futures` on the single controller process. A toy, self-contained illustration of this dispatch-then-`get` pattern is included after Step 4. ```python critic_output = critic_output.get() actor_output = actor_output.get() ``` ### Step 4: Run the split placement example ``` bash run_deepseek7b_llm.sh ```
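The sketch below is illustrative only: it uses Python's `concurrent.futures` as an analogy, not verl's worker groups or Ray. With `blocking=False`, the two update calls return handles immediately and the driver fetches both results afterwards, so the actor and critic updates overlap instead of running back to back.

```python
# Plain-Python analogy of the non-blocking dispatch + get() pattern (not verl's API).
import time
from concurrent.futures import ThreadPoolExecutor

def update_actor(data):
    time.sleep(1.0)            # stands in for the actor update on the first half of the GPUs
    return {"actor/loss": 0.12}

def update_critic(data):
    time.sleep(1.0)            # stands in for the critic update on the second half of the GPUs
    return {"critic/loss": 0.34}

data = {"batch": "..."}
start = time.time()
with ThreadPoolExecutor(max_workers=2) as pool:
    actor_future = pool.submit(update_actor, data)    # non-blocking dispatch
    critic_future = pool.submit(update_critic, data)  # non-blocking dispatch
    actor_output = actor_future.result()              # analogous to actor_output.get()
    critic_output = critic_future.result()            # analogous to critic_output.get()
print(actor_output, critic_output, f"elapsed ~{time.time() - start:.1f}s (not ~2s)")
```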
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/examples/split_placement/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/examples/split_placement/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 2686 }
# Models Common model zoos such as huggingface/transformers struggle when using PyTorch native model parallelism. Following the design principle of vLLM, we keep a simple, parallelizable, highly optimized model implementation with packed inputs in verl. ## Adding a New Huggingface Model ### Step 1: Copy the model file from HF to verl - Add a new file under verl/models/hf - Copy ONLY the model file from huggingface/transformers/models to verl/models/hf ### Step 2: Modify the model file to use packed inputs - Remove all the code related to inference (kv cache) - Modify the inputs to include only - input_ids (total_nnz,) - cu_seqlens (total_nnz + 1,) - max_seqlen_in_batch: int - Note that this requires using flash attention with a causal mask. (A small sketch of this packed format is given at the end of this document.) ### Step 2.5: Add tests - Add a test to compare this version and the huggingface version - Follow the existing infrastructure and add tests to tests/models/hf ### Step 3: Add a function to apply tensor parallelism - Please follow - https://pytorch.org/docs/stable/distributed.tensor.parallel.html - https://pytorch.org/tutorials/intermediate/TP_tutorial.html - General comments - Tensor Parallelism in native PyTorch is NOT auto-parallelism. The way it works is to specify how model parameters and inputs/outputs are resharded using configs. These configs are then registered as hooks to perform input/output resharding before/after the model forward. ### Step 4: Add a function to apply data parallelism - Please use FSDP2 APIs - See demo here https://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L413 ### Step 5: Add a function to apply pipeline parallelism - Comes in PyTorch 2.4 - Currently only in alpha in nightly version - Check torchtitan for more details
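To make the packed-input format in Step 2 concrete, here is a minimal sketch of converting padded `(batch, seqlen)` tensors into `input_ids (total_nnz,)`, `cu_seqlens (total_nnz + 1,)` and `max_seqlen_in_batch`. This is an illustration only; the helper name `pack_inputs` is hypothetical, not a verl API.

```python
# Sketch: turn padded inputs into the packed format expected by flash attention.
import torch

def pack_inputs(input_ids: torch.Tensor, attention_mask: torch.Tensor):
    """input_ids/attention_mask: (batch, seqlen) -> packed tensors for flash attention."""
    seqlens = attention_mask.sum(dim=-1)                  # (batch,) valid tokens per sample
    packed_input_ids = input_ids[attention_mask.bool()]   # (total_nnz,) padding removed
    cu_seqlens = torch.zeros(seqlens.numel() + 1, dtype=torch.int32)
    cu_seqlens[1:] = torch.cumsum(seqlens, dim=0)         # (total_nnz + 1,) cumulative offsets
    max_seqlen_in_batch = int(seqlens.max())
    return packed_input_ids, cu_seqlens, max_seqlen_in_batch

input_ids = torch.tensor([[5, 6, 7, 0], [8, 9, 0, 0]])
attention_mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
print(pack_inputs(input_ids, attention_mask))
# (tensor([5, 6, 7, 8, 9]), tensor([0, 3, 5], dtype=torch.int32), 3)
```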
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/verl/models/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/verl/models/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 1742 }
# Contributor Covenant Code of Conduct ## Our Pledge We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances of any kind * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Enforcement Responsibilities Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at `hoshihiyouga AT gmail DOT com`. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Enforcement Guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ### 1. Correction **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ### 2. Warning **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. 
No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ### 3. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity). [homepage]: https://www.contributor-covenant.org For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/CODE_OF_CONDUCT.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/CODE_OF_CONDUCT.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 5232 }
# Contributing to LLaMA Factory Everyone is welcome to contribute, and we value everybody's contribution. Code contributions are not the only way to help the community. Answering questions, helping others, and improving the documentation are also immensely valuable. It also helps us if you spread the word! Reference the library in blog posts about the awesome projects it made possible, shout out on Twitter every time it has helped you, or simply ⭐️ the repository to say thank you. However you choose to contribute, please be mindful and respect our [code of conduct](CODE_OF_CONDUCT.md). **This guide was heavily inspired by [transformers guide to contributing](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md).** ## Ways to contribute There are several ways you can contribute to LLaMA Factory: * Fix outstanding issues with the existing code. * Submit issues related to bugs or desired new features. * Contribute to the examples or to the documentation. ### Style guide LLaMA Factory follows the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html), check it for details. ### Create a Pull Request 1. Fork the [repository](https://github.com/hiyouga/LLaMA-Factory) by clicking on the [Fork](https://github.com/hiyouga/LLaMA-Factory/fork) button on the repository's page. This creates a copy of the code under your GitHub user account. 2. Clone your fork to your local disk, and add the base repository as a remote: ```bash git clone [email protected]:[username]/LLaMA-Factory.git cd LLaMA-Factory git remote add upstream https://github.com/hiyouga/LLaMA-Factory.git ``` 3. Create a new branch to hold your development changes: ```bash git checkout -b dev_your_branch ``` 4. Set up a development environment by running the following command in a virtual environment: ```bash pip install -e ".[dev]" ``` If LLaMA Factory was already installed in the virtual environment, remove it with `pip uninstall llamafactory` before reinstalling it in editable mode with the -e flag. 5. Check code before commit: ```bash make commit make style && make quality make test ``` 6. Submit changes: ```bash git add . git commit -m "commit message" git fetch upstream git rebase upstream/main git push -u origin dev_your_branch ``` 7. Create a merge request from your branch `dev_your_branch` at [origin repo](https://github.com/hiyouga/LLaMA-Factory).
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/CONTRIBUTING.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/CONTRIBUTING.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 2406 }
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] Did you read the [contributor guideline](https://github.com/hiyouga/LLaMA-Factory/blob/main/.github/CONTRIBUTING.md)? - [ ] Did you write any new necessary tests?
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/PULL_REQUEST_TEMPLATE.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/PULL_REQUEST_TEMPLATE.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 232 }
# Reporting Security Issues To report a security issue, please use the GitHub Security Advisory ["Report a Vulnerability"](https://github.com/hiyouga/LLaMA-Factory/security/advisories/new) tab. We will send a response indicating the next steps in handling your report. After the initial reply to your report, the security team will keep you informed of the progress towards a fix and full announcement, and may ask for additional information or guidance. Report security bugs in third-party modules to the person or team maintaining the module.
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/.github/SECURITY.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/.github/SECURITY.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 547 }
The [dataset_info.json](dataset_info.json) contains all available datasets. If you are using a custom dataset, please **make sure** to add a *dataset description* in `dataset_info.json` and specify `dataset: dataset_name` before training to use it. Currently we support datasets in **alpaca** and **sharegpt** format. ```json "dataset_name": { "hf_hub_url": "the name of the dataset repository on the Hugging Face hub. (if specified, ignore script_url and file_name)", "ms_hub_url": "the name of the dataset repository on the Model Scope hub. (if specified, ignore script_url and file_name)", "script_url": "the name of the directory containing a dataset loading script. (if specified, ignore file_name)", "file_name": "the name of the dataset folder or dataset file in this directory. (required if above are not specified)", "formatting": "the format of the dataset. (optional, default: alpaca, can be chosen from {alpaca, sharegpt})", "ranking": "whether the dataset is a preference dataset or not. (default: False)", "subset": "the name of the subset. (optional, default: None)", "split": "the name of dataset split to be used. (optional, default: train)", "folder": "the name of the folder of the dataset repository on the Hugging Face hub. (optional, default: None)", "num_samples": "the number of samples in the dataset to be used. (optional, default: None)", "columns (optional)": { "prompt": "the column name in the dataset containing the prompts. (default: instruction)", "query": "the column name in the dataset containing the queries. (default: input)", "response": "the column name in the dataset containing the responses. (default: output)", "history": "the column name in the dataset containing the histories. (default: None)", "messages": "the column name in the dataset containing the messages. (default: conversations)", "system": "the column name in the dataset containing the system prompts. (default: None)", "tools": "the column name in the dataset containing the tool description. (default: None)", "images": "the column name in the dataset containing the image inputs. (default: None)", "videos": "the column name in the dataset containing the videos inputs. (default: None)", "chosen": "the column name in the dataset containing the chosen answers. (default: None)", "rejected": "the column name in the dataset containing the rejected answers. (default: None)", "kto_tag": "the column name in the dataset containing the kto tags. (default: None)" }, "tags (optional, used for the sharegpt format)": { "role_tag": "the key in the message represents the identity. (default: from)", "content_tag": "the key in the message represents the content. (default: value)", "user_tag": "the value of the role_tag represents the user. (default: human)", "assistant_tag": "the value of the role_tag represents the assistant. (default: gpt)", "observation_tag": "the value of the role_tag represents the tool results. (default: observation)", "function_tag": "the value of the role_tag represents the function call. (default: function_call)", "system_tag": "the value of the role_tag represents the system prompt. (default: system, can override system column)" } } ``` ## Alpaca Format ### Supervised Fine-Tuning Dataset * [Example dataset](alpaca_en_demo.json) In supervised fine-tuning, the `instruction` column will be concatenated with the `input` column and used as the human prompt, then the human prompt would be `instruction\ninput`. The `output` column represents the model response. 
The `system` column will be used as the system prompt if specified. The `history` column is a list consisting of string tuples representing prompt-response pairs in the history messages. Note that the responses in the history **will also be learned by the model** in supervised fine-tuning. ```json [ { "instruction": "human instruction (required)", "input": "human input (optional)", "output": "model response (required)", "system": "system prompt (optional)", "history": [ ["human instruction in the first round (optional)", "model response in the first round (optional)"], ["human instruction in the second round (optional)", "model response in the second round (optional)"] ] } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "columns": { "prompt": "instruction", "query": "input", "response": "output", "system": "system", "history": "history" } } ``` ### Pre-training Dataset - [Example dataset](c4_demo.json) In pre-training, only the `text` column will be used for model learning. ```json [ {"text": "document"}, {"text": "document"} ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "columns": { "prompt": "text" } } ``` ### Preference Dataset Preference datasets are used for reward modeling, DPO training, ORPO and SimPO training. They require a better response in the `chosen` column and a worse response in the `rejected` column. ```json [ { "instruction": "human instruction (required)", "input": "human input (optional)", "chosen": "chosen answer (required)", "rejected": "rejected answer (required)" } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "ranking": true, "columns": { "prompt": "instruction", "query": "input", "chosen": "chosen", "rejected": "rejected" } } ``` ### KTO Dataset An additional column `kto_tag` is required. Please refer to the [sharegpt](#sharegpt-format) format for details. ### Multimodal Image Dataset An additional column `images` is required. Please refer to the [sharegpt](#sharegpt-format) format for details. ### Multimodal Video Dataset An additional column `videos` is required. Please refer to the [sharegpt](#sharegpt-format) format for details. ## Sharegpt Format ### Supervised Fine-Tuning Dataset - [Example dataset](glaive_toolcall_en_demo.json) Compared to the alpaca format, the sharegpt format allows the datasets to have **more roles**, such as human, gpt, observation and function. They are presented in a list of objects in the `conversations` column. Note that the human and observation should appear in odd positions, while gpt and function should appear in even positions. ```json [ { "conversations": [ { "from": "human", "value": "human instruction" }, { "from": "function_call", "value": "tool arguments" }, { "from": "observation", "value": "tool result" }, { "from": "gpt", "value": "model response" } ], "system": "system prompt (optional)", "tools": "tool description (optional)" } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "system": "system", "tools": "tools" } } ``` ### Pre-training Dataset Not yet supported, please use the [alpaca](#alpaca-format) format. 
### Preference Dataset - [Example dataset](dpo_en_demo.json) Preference datasets in sharegpt format also require a better message in the `chosen` column and a worse message in the `rejected` column. ```json [ { "conversations": [ { "from": "human", "value": "human instruction" }, { "from": "gpt", "value": "model response" }, { "from": "human", "value": "human instruction" } ], "chosen": { "from": "gpt", "value": "chosen answer (required)" }, "rejected": { "from": "gpt", "value": "rejected answer (required)" } } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "formatting": "sharegpt", "ranking": true, "columns": { "messages": "conversations", "chosen": "chosen", "rejected": "rejected" } } ``` ### KTO Dataset - [Example dataset](kto_en_demo.json) KTO datasets require an extra `kto_tag` column containing the boolean human feedback. ```json [ { "conversations": [ { "from": "human", "value": "human instruction" }, { "from": "gpt", "value": "model response" } ], "kto_tag": "human feedback [true/false] (required)" } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "kto_tag": "kto_tag" } } ``` ### Multimodal Image Dataset - [Example dataset](mllm_demo.json) Multimodal image datasets require an `images` column containing the paths to the input images. The number of images should be identical to the number of `<image>` tokens in the conversations. ```json [ { "conversations": [ { "from": "human", "value": "<image>human instruction" }, { "from": "gpt", "value": "model response" } ], "images": [ "image path (required)" ] } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "images": "images" } } ``` ### Multimodal Video Dataset - [Example dataset](mllm_video_demo.json) Multimodal video datasets require a `videos` column containing the paths to the input videos. The number of videos should be identical to the number of `<video>` tokens in the conversations. ```json [ { "conversations": [ { "from": "human", "value": "<video>human instruction" }, { "from": "gpt", "value": "model response" } ], "videos": [ "video path (required)" ] } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "videos": "videos" } } ``` ### OpenAI Format The openai format is simply a special case of the sharegpt format, where the first message may be a system prompt. ```json [ { "messages": [ { "role": "system", "content": "system prompt (optional)" }, { "role": "user", "content": "human instruction" }, { "role": "assistant", "content": "model response" } ] } ] ``` Regarding the above dataset, the *dataset description* in `dataset_info.json` should be: ```json "dataset_name": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "messages" }, "tags": { "role_tag": "role", "content_tag": "content", "user_tag": "user", "assistant_tag": "assistant", "system_tag": "system" } } ```
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/data/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/data/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 11390 }
[dataset_info.json](dataset_info.json) 包含了所有可用的数据集。如果您希望使用自定义数据集,请**务必**在 `dataset_info.json` 文件中添加*数据集描述*,并通过修改 `dataset: 数据集名称` 配置来使用数据集。 目前我们支持 **alpaca** 格式和 **sharegpt** 格式的数据集。 ```json "数据集名称": { "hf_hub_url": "Hugging Face 的数据集仓库地址(若指定,则忽略 script_url 和 file_name)", "ms_hub_url": "ModelScope 的数据集仓库地址(若指定,则忽略 script_url 和 file_name)", "script_url": "包含数据加载脚本的本地文件夹名称(若指定,则忽略 file_name)", "file_name": "该目录下数据集文件夹或文件的名称(若上述参数未指定,则此项必需)", "formatting": "数据集格式(可选,默认:alpaca,可以为 alpaca 或 sharegpt)", "ranking": "是否为偏好数据集(可选,默认:False)", "subset": "数据集子集的名称(可选,默认:None)", "split": "所使用的数据集切分(可选,默认:train)", "folder": "Hugging Face 仓库的文件夹名称(可选,默认:None)", "num_samples": "该数据集所使用的样本数量。(可选,默认:None)", "columns(可选)": { "prompt": "数据集代表提示词的表头名称(默认:instruction)", "query": "数据集代表请求的表头名称(默认:input)", "response": "数据集代表回答的表头名称(默认:output)", "history": "数据集代表历史对话的表头名称(默认:None)", "messages": "数据集代表消息列表的表头名称(默认:conversations)", "system": "数据集代表系统提示的表头名称(默认:None)", "tools": "数据集代表工具描述的表头名称(默认:None)", "images": "数据集代表图像输入的表头名称(默认:None)", "videos": "数据集代表视频输入的表头名称(默认:None)", "chosen": "数据集代表更优回答的表头名称(默认:None)", "rejected": "数据集代表更差回答的表头名称(默认:None)", "kto_tag": "数据集代表 KTO 标签的表头名称(默认:None)" }, "tags(可选,用于 sharegpt 格式)": { "role_tag": "消息中代表发送者身份的键名(默认:from)", "content_tag": "消息中代表文本内容的键名(默认:value)", "user_tag": "消息中代表用户的 role_tag(默认:human)", "assistant_tag": "消息中代表助手的 role_tag(默认:gpt)", "observation_tag": "消息中代表工具返回结果的 role_tag(默认:observation)", "function_tag": "消息中代表工具调用的 role_tag(默认:function_call)", "system_tag": "消息中代表系统提示的 role_tag(默认:system,会覆盖 system column)" } } ``` ## Alpaca 格式 ### 指令监督微调数据集 - [样例数据集](alpaca_zh_demo.json) 在指令监督微调时,`instruction` 列对应的内容会与 `input` 列对应的内容拼接后作为人类指令,即人类指令为 `instruction\ninput`。而 `output` 列对应的内容为模型回答。 如果指定,`system` 列对应的内容将被作为系统提示词。 `history` 列是由多个字符串二元组构成的列表,分别代表历史消息中每轮对话的指令和回答。注意在指令监督微调时,历史消息中的回答内容**也会被用于模型学习**。 ```json [ { "instruction": "人类指令(必填)", "input": "人类输入(选填)", "output": "模型回答(必填)", "system": "系统提示词(选填)", "history": [ ["第一轮指令(选填)", "第一轮回答(选填)"], ["第二轮指令(选填)", "第二轮回答(选填)"] ] } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "columns": { "prompt": "instruction", "query": "input", "response": "output", "system": "system", "history": "history" } } ``` ### 预训练数据集 - [样例数据集](c4_demo.json) 在预训练时,只有 `text` 列中的内容会用于模型学习。 ```json [ {"text": "document"}, {"text": "document"} ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "columns": { "prompt": "text" } } ``` ### 偏好数据集 偏好数据集用于奖励模型训练、DPO 训练、ORPO 训练和 SimPO 训练。 它需要在 `chosen` 列中提供更优的回答,并在 `rejected` 列中提供更差的回答。 ```json [ { "instruction": "人类指令(必填)", "input": "人类输入(选填)", "chosen": "优质回答(必填)", "rejected": "劣质回答(必填)" } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "ranking": true, "columns": { "prompt": "instruction", "query": "input", "chosen": "chosen", "rejected": "rejected" } } ``` ### KTO 数据集 KTO 数据集需要提供额外的 `kto_tag` 列。详情请参阅 [sharegpt](#sharegpt-格式)。 ### 多模态图像数据集 多模态图像数据集需要提供额外的 `images` 列。详情请参阅 [sharegpt](#sharegpt-格式)。 ### 多模态视频数据集 多模态视频数据集需要提供额外的 `videos` 列。详情请参阅 [sharegpt](#sharegpt-格式)。 ## Sharegpt 格式 ### 指令监督微调数据集 - [样例数据集](glaive_toolcall_zh_demo.json) 相比 alpaca 格式的数据集,sharegpt 格式支持**更多的角色种类**,例如 human、gpt、observation、function 等等。它们构成一个对象列表呈现在 `conversations` 列中。 注意其中 human 和 observation 必须出现在奇数位置,gpt 和 function 必须出现在偶数位置。 ```json [ { "conversations": [ { "from": "human", "value": "人类指令" }, { "from": "function_call", "value": "工具参数" }, { "from": "observation", "value": "工具结果" }, { "from": 
"gpt", "value": "模型回答" } ], "system": "系统提示词(选填)", "tools": "工具描述(选填)" } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "system": "system", "tools": "tools" } } ``` ### 预训练数据集 尚不支持,请使用 [alpaca](#alpaca-格式) 格式。 ### 偏好数据集 - [样例数据集](dpo_zh_demo.json) Sharegpt 格式的偏好数据集同样需要在 `chosen` 列中提供更优的消息,并在 `rejected` 列中提供更差的消息。 ```json [ { "conversations": [ { "from": "human", "value": "人类指令" }, { "from": "gpt", "value": "模型回答" }, { "from": "human", "value": "人类指令" } ], "chosen": { "from": "gpt", "value": "优质回答" }, "rejected": { "from": "gpt", "value": "劣质回答" } } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "formatting": "sharegpt", "ranking": true, "columns": { "messages": "conversations", "chosen": "chosen", "rejected": "rejected" } } ``` ### KTO 数据集 - [样例数据集](kto_en_demo.json) KTO 数据集需要额外添加一个 `kto_tag` 列,包含 bool 类型的人类反馈。 ```json [ { "conversations": [ { "from": "human", "value": "人类指令" }, { "from": "gpt", "value": "模型回答" } ], "kto_tag": "人类反馈 [true/false](必填)" } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "kto_tag": "kto_tag" } } ``` ### 多模态图像数据集 - [样例数据集](mllm_demo.json) 多模态图像数据集需要额外添加一个 `images` 列,包含输入图像的路径。 注意图片的数量必须与文本中所有 `<image>` 标记的数量严格一致。 ```json [ { "conversations": [ { "from": "human", "value": "<image>人类指令" }, { "from": "gpt", "value": "模型回答" } ], "images": [ "图像路径(必填)" ] } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "images": "images" } } ``` ### 多模态视频数据集 - [样例数据集](mllm_video_demo.json) 多模态视频数据集需要额外添加一个 `videos` 列,包含输入视频的路径。 注意视频的数量必须与文本中所有 `<video>` 标记的数量严格一致。 ```json [ { "conversations": [ { "from": "human", "value": "<video>人类指令" }, { "from": "gpt", "value": "模型回答" } ], "videos": [ "视频路径(必填)" ] } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "conversations", "videos": "videos" } } ``` ### OpenAI 格式 OpenAI 格式仅仅是 sharegpt 格式的一种特殊情况,其中第一条消息可能是系统提示词。 ```json [ { "messages": [ { "role": "system", "content": "系统提示词(选填)" }, { "role": "user", "content": "人类指令" }, { "role": "assistant", "content": "模型回答" } ] } ] ``` 对于上述格式的数据,`dataset_info.json` 中的*数据集描述*应为: ```json "数据集名称": { "file_name": "data.json", "formatting": "sharegpt", "columns": { "messages": "messages" }, "tags": { "role_tag": "role", "content_tag": "content", "user_tag": "user", "assistant_tag": "assistant", "system_tag": "system" } } ```
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/data/README_zh.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/data/README_zh.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 7307 }
We provide diverse examples about fine-tuning LLMs. Make sure to execute these commands in the `LLaMA-Factory` directory. ## Table of Contents - [LoRA Fine-Tuning](#lora-fine-tuning) - [QLoRA Fine-Tuning](#qlora-fine-tuning) - [Full-Parameter Fine-Tuning](#full-parameter-fine-tuning) - [Merging LoRA Adapters and Quantization](#merging-lora-adapters-and-quantization) - [Inferring LoRA Fine-Tuned Models](#inferring-lora-fine-tuned-models) - [Extras](#extras) Use `CUDA_VISIBLE_DEVICES` (GPU) or `ASCEND_RT_VISIBLE_DEVICES` (NPU) to choose computing devices. By default, LLaMA-Factory uses all visible computing devices. ## Examples ### LoRA Fine-Tuning #### (Continuous) Pre-Training ```bash llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml ``` #### Supervised Fine-Tuning ```bash llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml ``` #### Multimodal Supervised Fine-Tuning ```bash llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml llamafactory-cli train examples/train_lora/qwen2vl_lora_sft.yaml ``` #### DPO/ORPO/SimPO Training ```bash llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml ``` #### Multimodal DPO/ORPO/SimPO Training ```bash llamafactory-cli train examples/train_lora/qwen2vl_lora_dpo.yaml ``` #### Reward Modeling ```bash llamafactory-cli train examples/train_lora/llama3_lora_reward.yaml ``` #### PPO Training ```bash llamafactory-cli train examples/train_lora/llama3_lora_ppo.yaml ``` #### KTO Training ```bash llamafactory-cli train examples/train_lora/llama3_lora_kto.yaml ``` #### Preprocess Dataset It is useful for large dataset, use `tokenized_path` in config to load the preprocessed dataset. ```bash llamafactory-cli train examples/train_lora/llama3_preprocess.yaml ``` #### Evaluating on MMLU/CMMLU/C-Eval Benchmarks ```bash llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml ``` #### Supervised Fine-Tuning on Multiple Nodes ```bash FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml ``` #### Supervised Fine-Tuning with DeepSpeed ZeRO-3 (Weight Sharding) ```bash FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml ``` ### QLoRA Fine-Tuning #### Supervised Fine-Tuning with 4/8-bit Bitsandbytes/HQQ/EETQ Quantization (Recommended) ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml ``` #### Supervised Fine-Tuning with 4/8-bit GPTQ Quantization ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml ``` #### Supervised Fine-Tuning with 4-bit AWQ Quantization ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml ``` #### Supervised Fine-Tuning with 2-bit AQLM Quantization ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml ``` ### Full-Parameter Fine-Tuning #### Supervised Fine-Tuning on Single Node ```bash FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft.yaml ``` #### Supervised Fine-Tuning on Multiple Nodes ```bash FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml ``` #### Multimodal Supervised 
Fine-Tuning ```bash FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml ``` ### Merging LoRA Adapters and Quantization #### Merge LoRA Adapters Note: DO NOT use quantized model or `quantization_bit` when merging LoRA adapters. ```bash llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml ``` #### Quantizing Model using AutoGPTQ ```bash llamafactory-cli export examples/merge_lora/llama3_gptq.yaml ``` ### Inferring LoRA Fine-Tuned Models #### Batch Generation using vLLM Tensor Parallel ``` python scripts/vllm_infer.py --model_name_or_path path_to_merged_model --dataset alpaca_en_demo ``` #### Use CLI ChatBox ```bash llamafactory-cli chat examples/inference/llama3_lora_sft.yaml ``` #### Use Web UI ChatBox ```bash llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml ``` #### Launch OpenAI-style API ```bash llamafactory-cli api examples/inference/llama3_lora_sft.yaml ``` ### Extras #### Full-Parameter Fine-Tuning using GaLore ```bash llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml ``` #### Full-Parameter Fine-Tuning using BAdam ```bash llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml ``` #### Full-Parameter Fine-Tuning using Adam-mini ```bash llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml ``` #### LoRA+ Fine-Tuning ```bash llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml ``` #### PiSSA Fine-Tuning ```bash llamafactory-cli train examples/extras/pissa/llama3_lora_sft.yaml ``` #### Mixture-of-Depths Fine-Tuning ```bash llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml ``` #### LLaMA-Pro Fine-Tuning ```bash bash examples/extras/llama_pro/expand.sh llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml ``` #### FSDP+QLoRA Fine-Tuning ```bash bash examples/extras/fsdp_qlora/train.sh ``` #### Computing BLEU and ROUGE Scores ```bash llamafactory-cli train examples/extras/nlg_eval/llama3_lora_predict.yaml ```
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/examples/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/examples/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 5618 }
我们提供了多样化的大模型微调示例脚本。 请确保在 `LLaMA-Factory` 目录下执行下述命令。 ## 目录 - [LoRA 微调](#lora-微调) - [QLoRA 微调](#qlora-微调) - [全参数微调](#全参数微调) - [合并 LoRA 适配器与模型量化](#合并-lora-适配器与模型量化) - [推理 LoRA 模型](#推理-lora-模型) - [杂项](#杂项) 使用 `CUDA_VISIBLE_DEVICES`(GPU)或 `ASCEND_RT_VISIBLE_DEVICES`(NPU)选择计算设备。 LLaMA-Factory 默认使用所有可见的计算设备。 ## 示例 ### LoRA 微调 #### (增量)预训练 ```bash llamafactory-cli train examples/train_lora/llama3_lora_pretrain.yaml ``` #### 指令监督微调 ```bash llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml ``` #### 多模态指令监督微调 ```bash llamafactory-cli train examples/train_lora/llava1_5_lora_sft.yaml llamafactory-cli train examples/train_lora/qwen2vl_lora_sft.yaml ``` #### DPO/ORPO/SimPO 训练 ```bash llamafactory-cli train examples/train_lora/llama3_lora_dpo.yaml ``` #### 多模态 DPO/ORPO/SimPO 训练 ```bash llamafactory-cli train examples/train_lora/qwen2vl_lora_dpo.yaml ``` #### 奖励模型训练 ```bash llamafactory-cli train examples/train_lora/llama3_lora_reward.yaml ``` #### PPO 训练 ```bash llamafactory-cli train examples/train_lora/llama3_lora_ppo.yaml ``` #### KTO 训练 ```bash llamafactory-cli train examples/train_lora/llama3_lora_kto.yaml ``` #### 预处理数据集 对于大数据集有帮助,在配置中使用 `tokenized_path` 以加载预处理后的数据集。 ```bash llamafactory-cli train examples/train_lora/llama3_preprocess.yaml ``` #### 在 MMLU/CMMLU/C-Eval 上评估 ```bash llamafactory-cli eval examples/train_lora/llama3_lora_eval.yaml ``` #### 多机指令监督微调 ```bash FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft.yaml ``` #### 使用 DeepSpeed ZeRO-3 平均分配显存 ```bash FORCE_TORCHRUN=1 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml ``` ### QLoRA 微调 #### 基于 4/8 比特 Bitsandbytes/HQQ/EETQ 量化进行指令监督微调(推荐) ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_otfq.yaml ``` #### 基于 4/8 比特 GPTQ 量化进行指令监督微调 ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_gptq.yaml ``` #### 基于 4 比特 AWQ 量化进行指令监督微调 ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_awq.yaml ``` #### 基于 2 比特 AQLM 量化进行指令监督微调 ```bash llamafactory-cli train examples/train_qlora/llama3_lora_sft_aqlm.yaml ``` ### 全参数微调 #### 在单机上进行指令监督微调 ```bash FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/llama3_full_sft.yaml ``` #### 在多机上进行指令监督微调 ```bash FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=0 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml FORCE_TORCHRUN=1 NNODES=2 NODE_RANK=1 MASTER_ADDR=192.168.0.1 MASTER_PORT=29500 llamafactory-cli train examples/train_full/llama3_full_sft.yaml ``` #### 多模态指令监督微调 ```bash FORCE_TORCHRUN=1 llamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml ``` ### 合并 LoRA 适配器与模型量化 #### 合并 LoRA 适配器 注:请勿使用量化后的模型或 `quantization_bit` 参数来合并 LoRA 适配器。 ```bash llamafactory-cli export examples/merge_lora/llama3_lora_sft.yaml ``` #### 使用 AutoGPTQ 量化模型 ```bash llamafactory-cli export examples/merge_lora/llama3_gptq.yaml ``` ### 推理 LoRA 模型 #### 使用 vLLM+TP 批量推理 ``` python scripts/vllm_infer.py --model_name_or_path path_to_merged_model --dataset alpaca_en_demo ``` #### 使用命令行对话框 ```bash llamafactory-cli chat examples/inference/llama3_lora_sft.yaml ``` #### 使用浏览器对话框 ```bash llamafactory-cli webchat examples/inference/llama3_lora_sft.yaml ``` #### 启动 OpenAI 风格 API ```bash llamafactory-cli api examples/inference/llama3_lora_sft.yaml ``` ### 杂项 #### 使用 GaLore 进行全参数训练 ```bash 
llamafactory-cli train examples/extras/galore/llama3_full_sft.yaml ``` #### 使用 BAdam 进行全参数训练 ```bash llamafactory-cli train examples/extras/badam/llama3_full_sft.yaml ``` #### 使用 Adam-mini 进行全参数训练 ```bash llamafactory-cli train examples/extras/adam_mini/qwen2_full_sft.yaml ``` #### LoRA+ 微调 ```bash llamafactory-cli train examples/extras/loraplus/llama3_lora_sft.yaml ``` #### PiSSA 微调 ```bash llamafactory-cli train examples/extras/pissa/llama3_lora_sft.yaml ``` #### 深度混合微调 ```bash llamafactory-cli train examples/extras/mod/llama3_full_sft.yaml ``` #### LLaMA-Pro 微调 ```bash bash examples/extras/llama_pro/expand.sh llamafactory-cli train examples/extras/llama_pro/llama3_freeze_sft.yaml ``` #### FSDP+QLoRA 微调 ```bash bash examples/extras/fsdp_qlora/train.sh ``` #### 计算 BLEU 和 ROUGE 分数 ```bash llamafactory-cli train examples/extras/nlg_eval/llama3_lora_predict.yaml ```
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/train/LLaMA-Factory/examples/README_zh.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/train/LLaMA-Factory/examples/README_zh.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 4513 }
# Detached Worker ## How to run (only on a single node) - Start a local ray cluster: ```bash ray start --head --port=6379 ``` - Run the server: ```bash python3 server.py ``` - In another terminal, run the client: ```bash python3 client.py ```
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/tests/ray/detached_worker/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/tests/ray/detached_worker/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 241 }
# Dataset Format

## RLHF dataset

We combine all the data sources into a single parquet file. We directly organize the prompt into the chat format so that multi-turn chats can be easily incorporated. In the prompt, we may add instruction-following text to guide the model to output answers in a particular format so that we can extract them.

Math problems:

```json
{
    "data_source": "openai/gsm8k",
    "prompt": [{"role": "user", "content": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May? Let's think step by step and output the final answer after \"####\""}],
    "ability": "math",
    "reward_model": {
        "style": "rule",
        "ground_truth": ["72"]
    }
}
```
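As an illustration of how such a parquet file might be assembled, the sketch below builds one record following the schema above and writes it with pandas (pyarrow backend). The record contents and output filename are placeholders; only the field layout mirrors the example.

```python
import pandas as pd

# A toy record following the schema shown above; values are placeholders.
records = [
    {
        "data_source": "openai/gsm8k",
        "prompt": [{"role": "user", "content": "2 + 3 = ? Output the final answer after \"####\""}],
        "ability": "math",
        "reward_model": {"style": "rule", "ground_truth": ["5"]},
    },
]

df = pd.DataFrame(records)
# Requires pyarrow (or fastparquet) for nested columns.
df.to_parquet("rlhf_math.parquet", index=False)
```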
{ "source": "NovaSky-AI/SkyThought", "title": "skythought/skythought-rl/verl/utils/dataset/README.md", "url": "https://github.com/NovaSky-AI/SkyThought/blob/main/skythought/skythought-rl/verl/utils/dataset/README.md", "date": "2025-01-09T21:37:37", "stars": 2978, "description": "Sky-T1: Train your own O1 preview model within $450", "file_size": 796 }
<div align="center">

# Simple Reinforcement Learning for Reasoning

[![Notion](https://img.shields.io/badge/Notion-%23000000.svg?style=for-the-badge&logo=notion&logoColor=white)](https://hkust-nlp.notion.site/simplerl-reason)
[![Hugging Face](https://img.shields.io/badge/SimpleRL-fcd022?style=for-the-badge&logo=Huggingface&logoColor=000)](https://huggingface.co/collections/hkust-nlp/simplerl-67b543892b2ec6908ffff710)

</div>

This repo contains a simple reinforcement learning recipe to improve models' reasoning abilities. It is simple because only a rule-based reward is used; the recipe is almost the same as the one used in [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1), except that the code currently uses PPO rather than GRPO. We have used this code to train small models (7B) on limited data (8K examples), achieving surprisingly strong results -- for example, starting from Qwen2.5-Math-7B (base model), we perform RL on it directly. With no SFT, no reward model, and just 8K MATH examples for verification, the resulting model achieves (pass@1) 33.3% on AIME, 62.5% on AMC, and 77.2% on MATH, outperforming Qwen2.5-Math-7B-Instruct and being comparable to previous baselines that use >50x more data and more complicated components. You may check our Notion blog or the Introduction below for more details.

<div align="center">
<img src="https://github.com/user-attachments/assets/bacd1680-ccb0-4921-a687-8a595ebf5896" width="700" alt="simplelr-reaoning-intro-figure_00">
</div>

> Training dynamics of our Qwen2.5-SimpleRL-Zero training starting from Qwen2.5-Math-7B, without SFT or reward models.

## News

- **[2025/02/19]** We release checkpoints of [Qwen-2.5-Math-7B-SimpleRL-Zero](https://huggingface.co/hkust-nlp/Qwen-2.5-Math-7B-SimpleRL-Zero) and [Qwen-2.5-Math-7B-SimpleRL](https://huggingface.co/hkust-nlp/Qwen-2.5-Math-7B-SimpleRL) on Huggingface.
- **[2025/01/25]** We release the training/eval code and our blog. We are working on the paper and will release it very soon.

## Introduction

Many researchers are exploring possible paths towards learning o-style models, such as distillation, MCTS, process-based reward models, and reinforcement learning. Recently, both [DeepSeek-R1](https://github.com/deepseek-ai/DeepSeek-R1) and [Kimi-k1.5](https://github.com/MoonshotAI/Kimi-k1.5) demonstrated an extremely simple recipe on this path, using simple RL algorithms to learn emerging long CoT and self-reflection patterns and leading to strong results, with no MCTS or reward models used. However, their experiments are based on huge models in a large-scale RL setting. It remains unknown whether small models can demonstrate similar behaviors, how much data is needed, and how the quantitative results would compare with other approaches. We reproduce the training of DeepSeek-R1-Zero and DeepSeek-R1 for complex mathematical reasoning, starting from Qwen-2.5-Math-7B (base model), using only 8K (query, final answer) examples from the original MATH dataset. We are surprised by how far the 8K MATH examples lift this 7B base model without any other external signals:

***All results are in pass@1 accuracy***

|                                 | AIME 2024 | MATH 500 | AMC  | Minerva Math | OlympiadBench | Avg.  |
|---------------------------------|-----------|----------|------|--------------|---------------|-------|
| Qwen2.5-Math-7B-Base            | 16.7      | 52.4     | 52.5 | 12.9         | 16.4          | 30.2  |
| Qwen2.5-Math-7B-Base + 8K MATH SFT | 3.3    | 54.6     | 22.5 | 32.7         | 19.6          | 26.5  |
| Qwen-2.5-Math-7B-Instruct       | 13.3      | 79.8     | 50.6 | 34.6         | 40.7          | 43.8  |
| Llama-3.1-70B-Instruct          | 16.73     | 64.6     | 30.1 | 35.3         | 31.9          | 35.7  |
| rStar-Math-7B                   | 26.7      | 78.4     | 47.5 | -            | 47.1          | -     |
| Eurus-2-7B-PRIME                | 26.7      | 79.2     | 57.8 | 38.6         | 42.1          | 48.9  |
| Qwen2.5-7B-SimpleRL-Zero        | 33.3      | 77.2     | 62.5 | 33.5         | 37.6          | 48.8  |
| Qwen2.5-7B-SimpleRL             | 26.7      | 82.4     | 62.5 | 39.7         | 43.3          | 50.9  |

Qwen2.5-7B-SimpleRL-Zero is the simple RL training from the base model directly, using only 8K MATH examples. It achieves gains of nearly 20 absolute points on average compared to the base model. Moreover, it outperforms Qwen-2.5-Math-7B-Instruct on average, and is roughly comparable to the recently released [Eurus-2-7B-PRIME](https://github.com/PRIME-RL/PRIME) and [rStar-Math-7B](https://arxiv.org/abs/2501.04519), which are also based on Qwen-2.5-Math-7B. These baselines contain much more complicated components such as reward models and use at least 50x more (and more advanced) data:

***Data comparison of different approaches***

|              | Qwen2.5-Math-7B-Instruct | rStar-Math-7B | Eurus-2-7B-PRIME | Qwen2.5-7B-SimpleRL-Zero |
|---------------------------|--------------------------|---------------|------------------|--------------------------|
| **Base Model**            | Qwen2.5-Math-7B          | Qwen2.5-Math-7B | Qwen2.5-Math-7B | Qwen2.5-Math-7B          |
| **SFT Data**              | 2.5M (open-source and in-house) | ~7.3M (MATH, NuminaMath, etc.) | 230K | 0 |
| **RM Data**               | 618K (in-house)          | ~7k (in-house) | 0               | 0                        |
| **RM**                    | Qwen2.5-Math-RM (72B)    | None          | Eurus-2-7B-SFT  | None                     |
| **RL Data**               | 66K queries × 32 samples | ~3.647M × 16  | 150K queries × 4 samples | 8K queries × 8 samples |

We are both excited and surprised by the significant gains achieved using only 8K MATH examples. Notably, while the MATH queries are considerably easier than many challenging benchmarks such as AIME and AMC, this simple RL recipe demonstrates remarkable generalization, with performance increasing by at least 10 absolute points compared to the base model. This easy-to-hard generalization effect is something we could not have envisioned with standard SFT training on the same dataset. We fully open-source our training code and details, hopefully as a strong baseline setup for the community to further explore the potential of RL for reasoning.

## Quick Start

### Installation

Our code is implemented based on OpenRLHF. Please follow [OpenRLHF's guidance](https://github.com/OpenRLHF/OpenRLHF/tree/main?tab=readme-ov-file#installation) to configure the required environments and install our version:

```bash
git clone https://github.com/hkust-nlp/simpleRL-reason.git
cd train
pip install -e .
```

### Reproducing SimpleRL-Zero

The minimum hardware requirement for training is 6 H/A100-80G GPUs (note: this configuration has not been tested yet). To accelerate our experiments, we used 4 nodes, each equipped with 8 H/A100-80G GPUs, to train on 8K MATH examples for 120 steps over approximately 1.5 days, achieving convergence. However, our results indicate that satisfactory performance can be achieved with around 60 steps, which requires less than one day of training using 4 nodes.

The training process leverages PPO with Ray and vLLM for acceleration. First, launch the Ray cluster using the command below:

```bash
# launch the master node of ray in container
ray start --head --node-ip-address 0.0.0.0 --num-gpus 8

# if you want to launch ray on more nodes, use
ray start --address {MASTER-NODE-ADDRESS}:6379 --num-gpus 8
```

Next, submit the training job from the master node:

```bash
cd train

# For 4 nodes:
ray job submit --address="http://127.0.0.1:8265" \
    --runtime-env-json='{
        "pip": ["ray==2.12.0", "latex2sympy2", "timeout_decorator"]
    }' -- /bin/bash examples/script/train_ppo_qwen_base_math_lv35_new.sh

# For 1 node:
ray job submit --address="http://127.0.0.1:8265" \
    --runtime-env-json='{
        "pip": ["ray==2.12.0", "latex2sympy2", "timeout_decorator"]
    }' -- /bin/bash examples/script/train_ppo_qwen_base_math_lv35_1_node.sh
```

### Reproducing SimpleRL

Coming soon!

### Evaluate

We used [Qwen Math's codebase](https://github.com/QwenLM/Qwen2.5-Math/tree/main/evaluation) for evaluation, but for fairness we completely disabled solving problems by calling code. Please follow the instructions in `/eval` for evaluation.

## Citation

If you find this blog or our code useful, we would appreciate it if you could cite our work:

```bibtex
@misc{zeng2025simplerl,
  title={7B Model and 8K Examples: Emerging Reasoning with Reinforcement Learning is Both Effective and Efficient},
  author={Weihao Zeng and Yuzhen Huang and Wei Liu and Keqing He and Qian Liu and Zejun Ma and Junxian He},
  year={2025},
  howpublished={\url{https://hkust-nlp.notion.site/simplerl-reason}},
  note={Notion Blog}
}
```

## Acknowledgement

We implement our reinforcement learning algorithm by extending [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF). We utilize [vLLM](https://github.com/vllm-project/vllm) for inference and develop evaluation scripts based on [Qwen2.5-Math](https://github.com/QwenLM/Qwen2.5-Math/tree/main/evaluation). In particular, we thank the developers of DeepSeek-R1 and Kimi-k1.5 for their innovation and contribution to the open-source community.

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=hkust-nlp/simpleRL-reason&type=Date)](https://star-history.com/#hkust-nlp/simpleRL-reason&Date)
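To make the rule-based reward described above concrete, here is a minimal sketch of what such a verifier could look like: it extracts the final answer from the model's output and compares it to the ground truth. This is an illustration only, not the repository's actual reward implementation, and the answer-extraction convention (a `####` marker) is an assumption borrowed from the GSM8K-style prompt shown earlier in this document.

```python
import re

def rule_based_reward(response: str, ground_truth: str) -> float:
    """Toy verifier: reward 1.0 if the final answer matches the ground truth, else 0.0.

    Assumes the model is prompted to emit its final answer after a '####' marker;
    real implementations normalize LaTeX, fractions, units, etc. before comparing.
    """
    match = re.search(r"####\s*(.+)", response)
    if match is None:
        return 0.0
    prediction = match.group(1).strip().rstrip(".")
    return 1.0 if prediction == ground_truth.strip() else 0.0

# Example usage with a made-up model response.
print(rule_based_reward("Step-by-step reasoning... #### 72", "72"))  # -> 1.0
```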
{ "source": "hkust-nlp/simpleRL-reason", "title": "README.md", "url": "https://github.com/hkust-nlp/simpleRL-reason/blob/main/README.md", "date": "2025-01-25T07:16:58", "stars": 2958, "description": "This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data", "file_size": 9490 }
### Requirements You can install the required packages with the following command: ```bash cd latex2sympy pip install -e . cd .. pip install -r requirements.txt pip install vllm==0.5.1 --no-build-isolation pip install transformers==4.42.3 ``` ### Evaluation You can evaluate Qwen2.5/Qwen2-Math-Instruct series model with the following command: ```bash # Qwen2.5-Math-Instruct Series PROMPT_TYPE="qwen25-math-cot" # Qwen2.5-Math-1.5B-Instruct export CUDA_VISIBLE_DEVICES="0" MODEL_NAME_OR_PATH="Qwen/Qwen2.5-Math-1.5B-Instruct" OUTPUT_DIR="Qwen2.5-Math-1.5B-Instruct-Math-Eval" bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH $OUTPUT_DIR # Qwen2.5-Math-7B-Instruct export CUDA_VISIBLE_DEVICES="0" MODEL_NAME_OR_PATH="Qwen/Qwen2.5-Math-7B-Instruct" OUTPUT_DIR="Qwen2.5-Math-7B-Instruct-Math-Eval" bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH $OUTPUT_DIR # Qwen2.5-Math-72B-Instruct export CUDA_VISIBLE_DEVICES="0,1,2,3" MODEL_NAME_OR_PATH="Qwen/Qwen2.5-Math-72B-Instruct" OUTPUT_DIR="Qwen2.5-Math-72B-Instruct-Math-Eval" bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH $OUTPUT_DIR # Qwen2-Math-Instruct Series PROMPT_TYPE="qwen-boxed" # Qwen2-Math-1.5B-Instruct export CUDA_VISIBLE_DEVICES="0" MODEL_NAME_OR_PATH="Qwen/Qwen2-Math-1.5B-Instruct" OUTPUT_DIR="Qwen2-Math-1.5B-Instruct-Math-Eval" bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH $OUTPUT_DIR # Qwen2-Math-7B-Instruct export CUDA_VISIBLE_DEVICES="0" MODEL_NAME_OR_PATH="Qwen/Qwen2-Math-7B-Instruct" OUTPUT_DIR="Qwen2-Math-7B-Instruct-Math-Eval" bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH $OUTPUT_DIR # Qwen2-Math-72B-Instruct export CUDA_VISIBLE_DEVICES="0,1,2,3" MODEL_NAME_OR_PATH="Qwen/Qwen2-Math-72B-Instruct" OUTPUT_DIR="Qwen2-Math-72B-Instruct-Math-Eval" bash sh/eval.sh $PROMPT_TYPE $MODEL_NAME_OR_PATH $OUTPUT_DIR ``` ## Acknowledgement The codebase is adapted from [math-evaluation-harness](https://github.com/ZubinGou/math-evaluation-harness).
{ "source": "hkust-nlp/simpleRL-reason", "title": "eval/README.md", "url": "https://github.com/hkust-nlp/simpleRL-reason/blob/main/eval/README.md", "date": "2025-01-25T07:16:58", "stars": 2958, "description": "This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data", "file_size": 1934 }
<div align="center"> <img alt="OpenRLHF logo" src="./docs/logo.png" style="height: 140px;" /> </div> <div align="center"> <p align="center"> <a href="https://github.com/OpenRLHF/OpenRLHF/graphs/contributors"> <img alt="GitHub Contributors" src="https://img.shields.io/github/contributors/OpenRLHF/OpenRLHF" /> </a> <a href="https://github.com/OpenRLHF/OpenRLHF/issues"> <img alt="Issues" src="https://img.shields.io/github/issues/OpenRLHF/OpenRLHF?color=0088ff" /> </a> <a href="https://github.com/OpenRLHF/OpenRLHF/discussions"> <img alt="Issues" src="https://img.shields.io/github/discussions/OpenRLHF/OpenRLHF?color=0088ff" /> </a> <a href="https://github.com/OpenRLHF/OpenRLHF/pulls"> <img alt="GitHub pull requests" src="https://img.shields.io/github/issues-pr/OpenRLHF/OpenRLHF?color=0088ff" /> <a href="https://github.com/OpenRLHF/OpenRLHF/stargazers"> <img alt="GitHub stars" src="https://img.shields.io/github/stars/OpenRLHF/OpenRLHF?color=ccf" /> </a> <br> <em>Open-source / Comprehensive / Lightweight / Easy-to-use</em> </p> </p> </div> <hr> <span>[ English | <a href="README_zh.md">中文</a> ]</span> OpenRLHF is a high-performance RLHF framework built on Ray, DeepSpeed and HF Transformers: - **Simple and easy to use**: OpenRLHF is one of the simplest high-performance RLHF libraries currently available, and seamlessly compatible with Huggingface models and datasets. - **High performance**: RLHF training spends 80% of the time on the sample generation stage. Thanks to the ability to use a large inference batch size with Ray and Packing Samples and vLLM generation acceleration, the performance of OpenRLHF 3~4x+ that of Optimized DeepSpeedChat with Hybrid Engine. - **Distributed RLHF**: OpenRLHF distribute the Actor, Reward, Reference, and Critic models onto separate GPUs using Ray, while placing the Adam optimizer on the CPU. This enables full-scale fine-tuning of 70B+ models with multiple A100 80G GPUs and vLLM and 7B models across multiple 24GB RTX 4090 GPUs. - **PPO Implementation Optimization**: We integrated the implementation tricks for PPO to improve the training stability, referencing [Zhihu](https://zhuanlan.zhihu.com/p/622134699) and the [Notion blog](https://difficult-link-dd7.notion.site/eb7b2d1891f44b3a84e7396d19d39e6f?v=01bcb084210149488d730064cbabc99f). More details are in [Slides](https://docs.google.com/presentation/d/1JRhB1d7csofx0PIZBmfyBdMluxNd5JLPpUHrrvVhGnk/edit?usp=sharing) | [Technical Report](https://arxiv.org/abs/2405.11143) | [Documents](https://openrlhf.readthedocs.io/) ## Features - Distributed [PPO](./examples/scripts/train_ppo_llama_ray.sh)/[Reinforce](./examples/scripts/train_reinforce_llama_ray.sh) based on Ray. - Support full RLHF fine-tuning of models with [over 70 billion parameters](./examples/scripts/train_ppo_llama_ray_70b.sh). - Support vLLM generation acceleration in RLHF (--vllm_num_engines). - Support multiple reward models (--reward_pretrain model1,model2...) and remote reward model(--remote_rm_url). - Support [DPO (direct-preference-optimization)/IPO/cDPO](./examples/scripts/train_dpo_llama.sh). - Support [Kahneman-Tversky optimization (KTO)](./examples/scripts/train_kto_llama.sh). - Support [Rejection Sampling](./examples/scripts/train_rejection_sampling_llama.sh). - Support [Iterative DPO](./examples/scripts/train_iterative_dpo_llama.sh) (https://github.com/RLHFlow/Online-RLHF). - Support [Conditional SFT](./examples/scripts/train_conditional_llama.sh) (https://arxiv.org/abs/2308.12050). 
- Support [Knowledge Distillation](./examples/scripts/train_knowledge_distillation.sh) (https://github.com/microsoft/LMOps/tree/main/minillm). - Support [Process Reward Model (PRM)](./examples/scripts/train_prm_mistral.sh). - Support SFT/DPO/RM/PRM/PPO training samples packing (--packing_samples). - Support [RingAttention](./examples/scripts/train_dpo_ring_llama.sh) (--ring_attn_size, --ring_head_stride) - Support [MoE](./examples/test_scripts/train_sft_mixtral_lora.sh) (--aux_loss_coef) - Support FlashAttention2 (--flash_attn). - Support QLoRA (--load_in_4bit), [LoRA (--lora_rank, --target_modules)](./examples/scripts/train_sft_mixtral_lora.sh). - Support HuggingFace `tokenizer.apply_chat_template` in datasets (--apply_chat_template and --input_key). - Support Wandb log (--use_wandb) and tensorboard (--use_tensorboard). - Support for recovering from checkpoint (--load_checkpoint and --save_steps). - Multi-nodes [training scripts](./examples/scripts/train_llama_slurm.sh) for Slurm. ### PPO Support Matrix | Feature | OpenRLHF | DSChat | CAIChat | TRL | | ------------- |:-------------:| :-------------:| :-------------:| :-------------:| | 70B+ Full Tuning with 16 A100-80GB | ✅ | ❌ | ❌ | ❌ | | 7B Full Tuning with 4 RTX4090 | ✅ | ❌ | ❌ | ❌ | | 34B DPO Full Tuning with 8 A100-80GB | ✅ | ❌ | ❌ | ❌ | | Inference Engine in PPO | ✅ | ✅ | ❌ | ❌ | | PPO Implementation Tricks | ✅ | ❌ | ❌ | ✅ | | Support QLoRA | ✅ | ❌ | ❌ | ✅ | | Support Mixtral 8*7b | ✅ | ❌ | ❌ | ❌ | | Support Unmerged Actor-Critic | ✅ | ✅ | ✅ | ❌ | | Support Multiple Reward Models | ✅ | ❌ | ❌ | ❌ | | Support Huggingface Models | ✅ | ✅ | ✅ | ✅ | | Easy-to-use | ✅ | ❌ (HybridEngine bugs) | ✅ | ✅ | ## Quick Start ### Installation To use OpenRLHF, first launch the docker container (**Recommended**) and `pip install` openrlhf inside the docker container: ```bash # Launch the docker container docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN -v $PWD:/openrlhf nvcr.io/nvidia/pytorch:24.02-py3 bash sudo pip uninstall xgboost transformer_engine flash_attn -y # pip install pip install openrlhf # If you want to use vLLM acceleration (To install vLLM 0.4.2) pip install openrlhf[vllm] # latest vLLM is also supported (Please use `--vllm_sync_backend gloo` or `export NCCL_P2P_DISABLE=1`) pip install openrlhf[vllm_latest] # pip install the latest version pip install git+https://github.com/OpenRLHF/OpenRLHF.git # Or git clone git clone https://github.com/OpenRLHF/OpenRLHF.git cd OpenRLHF pip install -e . ``` > [!NOTE] >We recommend using vLLM 0.4.2, as the 0.4.3+ versions currently require synchronizing weights via Gloo (`--vllm_sync_backend gloo`) or disabling P2P communication (`export NCCL_P2P_DISABLE=1`). >We also provided the [Dockerfiles for vLLM](./dockerfile/) and [One-Click Installation Script of Nvidia-Docker](./examples/scripts/nvidia_docker_install.sh). ### Prepare Datasets OpenRLHF provides multiple data processing methods in our dataset classes. 
Such as in the [Prompt Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/prompts_dataset.py#L6): ```python def preprocess_data(data, input_template=None, input_key="input", apply_chat_template=None) -> str: if apply_chat_template: prompt = apply_chat_template(data[input_key], tokenize=False, add_generation_prompt=True) else: prompt = data[input_key] if input_template: prompt = input_template.format(prompt) return prompt ``` - We can use `--input_key` to specify the `JSON key name` of the input datasets `--prompt_data {name or path}` (PPO) or `--dataset {name or path}`, and use `--apply_chat_template` to utilize the `chat_template` from the [Huggingface Tokenizer](https://huggingface.co/docs/transformers/main/en/chat_templating). - If you don't want to use `--apply_chat_template`, you can use `--input_template` instead, or preprocess the datasets offline in advance. - OpenRLHF also support mixing multiple datasets using `--prompt_data_probs 0.1,0.4,0.5` (PPO) or `--dataset_probs 0.1,0.4,0.5`. How Chat Templating Works: ```python dataset = [{"input_key": [ {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"}, ]}] tokenizer.apply_chat_template(dataset[0]["input_key"], tokenize=False) "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]" ``` How to specify training and test datasets ? You can specify it using the `data_type@data_dir` format. For example, the dataset can be set as `--dataset json@./data`. ``` data ├── test.jsonl └── train.jsonl ``` > [!NOTE] > By default, we use `train` and `test` as splits to distinguish training and testing datasets from Huggingface. > The ``JSON key`` options depends on the specific datasets. See [Reward Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/reward_dataset.py#L10) and [SFT Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/sft_dataset.py#L9) ### Supervised Fine-tuning OpenRLHF's model checkpoint is fully compatible with HuggingFace models. You can specify the model name or path using `--pretrain {name or path}`, `--reward_pretrain {name or path}` and `--critic_pretrain {name or path}`. We have provided some pre-trained checkpoints and datasets on [HuggingFace OpenRLHF](https://huggingface.co/OpenRLHF). Then you can use the startup scripts we provide in the [examples/scripts](./examples/scripts/) directory, or start the training using the following commands. 
```bash deepspeed --module openrlhf.cli.train_sft \ --max_len 4096 \ --dataset Open-Orca/OpenOrca \ --input_key question \ --output_key response \ --input_template 'User: {}\nAssistant: ' \ --train_batch_size 256 \ --micro_train_batch_size 2 \ --max_samples 500000 \ --pretrain meta-llama/Meta-Llama-3-8B \ --save_path ./checkpoint/llama3-8b-sft \ --save_steps -1 \ --logging_steps 1 \ --eval_steps -1 \ --zero_stage 2 \ --max_epochs 1 \ --packing_samples \ --bf16 \ --flash_attn \ --learning_rate 5e-6 \ --gradient_checkpointing \ --use_wandb {wandb_token} # Support HF tokenizer.apply_chat_template # --apply_chat_template # --input_key {JSON Key} # --tokenizer_chat_template {HF Chat Template} # Can also be used for continued pre-training # --pretrain_mode ``` > [!NOTE] > OpenRLHF SFT/DPO/RewardModel/PPO trainers support `--packing_samples` [based on `--flash_attn`](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing) ### Reward Model Training ```bash deepspeed --module openrlhf.cli.train_rm \ --save_path ./checkpoint/llama3-8b-rm \ --save_steps -1 \ --logging_steps 1 \ --eval_steps -1 \ --train_batch_size 256 \ --micro_train_batch_size 1 \ --pretrain OpenRLHF/Llama-3-8b-sft-mixture \ --bf16 \ --max_epochs 1 \ --max_len 8192 \ --zero_stage 3 \ --learning_rate 9e-6 \ --dataset OpenRLHF/preference_dataset_mixture2_and_safe_pku \ --apply_chat_template \ --chosen_key chosen \ --rejected_key rejected \ --flash_attn \ --packing_samples \ --gradient_checkpointing \ --use_wandb {wandb_token} ``` ### PPO without Ray ```bash deepspeed --module openrlhf.cli.train_ppo \ --pretrain OpenRLHF/Llama-3-8b-sft-mixture \ --reward_pretrain OpenRLHF/Llama-3-8b-rm-mixture \ --save_path ./checkpoint/llama-3-8b-rlhf \ --save_steps -1 \ --logging_steps 1 \ --eval_steps -1 \ --micro_train_batch_size 2 \ --train_batch_size 128 \ --micro_rollout_batch_size 4 \ --rollout_batch_size 1024 \ --max_epochs 1 \ --prompt_max_len 1024 \ --generate_max_len 1024 \ --zero_stage 2 \ --bf16 \ --actor_learning_rate 5e-7 \ --critic_learning_rate 9e-6 \ --init_kl_coef 0.01 \ --prompt_data OpenRLHF/prompt-collection-v0.1 \ --input_key context_messages \ --apply_chat_template \ --max_samples 100000 \ --normalize_reward \ --adam_offload \ --flash_attn \ --gradient_checkpointing \ --use_wandb {wandb_token} # Support remote reward model (HTTP) # --remote_rm_url http://localhost:5000/get_reward ``` ### PPO with Ray and vLLM To improve RLHF training speed or support 70B models, we can use the PPO with Ray and vLLM acceleration ```bash # launch the master node of ray in container ray start --head --node-ip-address 0.0.0.0 --num-gpus 8 # if you want to launch ray on more nodes, use ray start --address {MASTER-NODE-ADDRESS}:6379 --num-gpus 8 ray job submit --address="http://127.0.0.1:8265" \ --runtime-env-json='{"working_dir": "/openrlhf"}' \ -- python3 -m openrlhf.cli.train_ppo_ray \ --ref_num_nodes 1 \ --ref_num_gpus_per_node 2 \ --reward_num_nodes 1 \ --reward_num_gpus_per_node 2 \ --critic_num_nodes 1 \ --critic_num_gpus_per_node 2 \ --actor_num_nodes 1 \ --actor_num_gpus_per_node 2 \ --vllm_num_engines 2 \ --vllm_tensor_parallel_size 2 \ --colocate_critic_reward \ --colocate_actor_ref \ --pretrain OpenRLHF/Llama-3-8b-sft-mixture \ --reward_pretrain OpenRLHF/Llama-3-8b-rm-mixture \ --save_path /openrlhf/examples/checkpoint/llama3-8b-rlhf \ --micro_train_batch_size 8 \ --train_batch_size 128 \ --micro_rollout_batch_size 16 \ --rollout_batch_size 1024 \ --max_samples 100000 \ --max_epochs 1 \ --prompt_max_len 1024 \ 
--generate_max_len 1024 \ --zero_stage 3 \ --bf16 \ --actor_learning_rate 5e-7 \ --critic_learning_rate 9e-6 \ --init_kl_coef 0.01 \ --prompt_data OpenRLHF/prompt-collection-v0.1 \ --input_key context_messages \ --apply_chat_template \ --normalize_reward \ --packing_samples \ --adam_offload \ --flash_attn \ --gradient_checkpointing \ --use_wandb {wandb_token} # Support remote reward model (HTTP) # --remote_rm_url http://localhost:5000/get_reward ``` > [!NOTE] > Do not set `--vllm_num_engines` means not using the vLLM engine. > You can also use ``setup_commands`` to let Ray automatically deploy the environment, such as `--runtime-env-json='{"setup_commands": ["pip install openrlhf[vllm]"]}'`. The launch scripts and documents for supported algorithms are in [example/scripts](./examples/scripts/) and [Documents - Usage](https://openrlhf.readthedocs.io/en/latest/usage.html) ## Performance We optimized DSChat's performance to the greatest extent possible by employing techniques such as enabling Adam offload, along with reward model (RM) and reference model (Ref) offload to increase the micro-batch size during the inference stage and avoid out-of-memory issues. We even fixed some bugs in DSChat to enable the Hybrid Engine (HE) for LLaMA2. The average time (seconds) it took to train 1024 prompts with 1 PPO epoch using the Optimized DSChat and OpenRLHF: | **Size** | **NVIDIA A800-80GB GPUs** | **Optimized DSChat (with Hybrid Engine)** | **OpenRLHF** | **Speedup** | | :---: | :---: | :---: | :---: | :---: | | 7B | 16 | 855.09 | 471.11 | 1.82x | | 13B | 32 | 1528.93 | 608.93 | 2.5x | | 34B | 32 | 3634.98 | 1526.4 | 2.4x | | 70B | 32 | 10407.0 | 4488.53 | 2.3x | > [!NOTE] > The data is outdated; please refer to the performance tuning section for re-testing. ### Performance Tuning Guide To achieve optimal performance, we recommend allocating more nodes to the vLLM Engine. For example, for a 70B model with 32 A100 GPUs, it is advised to allocate more than 16 A100 GPUs to the vLLM Engine, 8 GPUs to the Actor model, and the remaining 8 GPUs to the Critic model. Additionally, enable the `--colocate_critic_reward`, `--colocate_actor_ref` options to merge nodes. Finally, you should increase the `rollout_micro_batch_size` (and minimize the TP size of vLLM engine) as much as possible. During the training phase, a larger `--micro_train_batch_size` is better and enable `--packing_samples`. Enable `enable_prefix_caching` in vLLM generation when `n_samples_per_prompt > 1`. When there are enough GPUs, please disable --adam_offload. ## Companies and Organizations using OpenRLHF - ByteDance - NexusFlow - Baidu - Jülich Supercomputing Centre (JSC) - Berkeley Starling Team - Tencent - Alibaba - Google - China Telecom - ... ## Join Us **How to Join?** 1. Email us at [email protected] or join [GitHub Organization](https://github.com/OpenRLHF). Please include the following details: - Your name - Your GitHub username - Your areas of interest - Your skills and experience related to NLP and/or AI 1. You can also join us through the official GitHub [OpenRLHF ↗](https://github.com/OpenRLHF/OpenRLHF) project page. Just create an issue about your interest to contribute and we will get back to you. **What can you do?** 1. Join the team and participate in the development of the OpenRLHF project. 1. Contribute to the project by submitting pull requests. 1. Help improve documentation, fix bugs, or create new features. 1. Share the project and help us grow the community. 
## Sponsor Us Your sponsorship can help us maintain and improve OpenRLHF. If you find this project useful, please consider sponsoring us. You can sponsor us on [Open Collective ↗](https://opencollective.com/OpenRLHF). ## Starchart [![Star History Chart](https://api.star-history.com/svg?repos=OpenRLHF/OpenRLHF&type=Date)](https://star-history.com/#OpenRLHF/OpenRLHF&Date) ## Contributors A big thank you to all our contributors! If you want to contribute, feel free to make a pull request or create an issue. <a href="https://github.com/OpenRLHF/OpenRLHF/graphs/contributors"> <img src="https://contrib.rocks/image?repo=OpenRLHF/OpenRLHF" /> </a> ## References & Acknowledgements We would like to express our gratitude to the following projects and organizations for their contributions to the field of AI and NLP: - [Hugging Face Transformers ↗](https://github.com/huggingface/transformers) - [OpenAI GPT ↗](https://github.com/openai/gpt-3) - [LLaMA ↗](https://llama.meta.com/) - [DeepSpeed ↗](https://github.com/microsoft/DeepSpeed) - [Ray ↗](https://github.com/ray-project/ray) Our project would also like to thank [ColossalChat](https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat) and [DeepSpeedChat](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat). In the early stages of the project, we referred to their code design. (2024/7) Our GitHub organization has changed from OpenLLMAI to OpenRLHF. ## Citation ``` @article{hu2024openrlhf, title={OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework}, author={Jian Hu and Xibin Wu and Weixun Wang and Xianyu and Dehao Zhang and Yu Cao}, journal={arXiv preprint arXiv:2405.11143}, year={2024} } ``` ______________________________________________________________________ *OpenRLHF © 2024 OpenRLHF. All Rights Reserved.*
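As a small appendix to the dataset section earlier in this README, the sketch below creates the local `data_type@data_dir` layout (`./data` with `train.jsonl`/`test.jsonl`) described above. The records and key names are placeholders chosen to match the `--input_key question --output_key response` flags used in the SFT example; adjust them to your own dataset.

```bash
mkdir -p data
cat > data/train.jsonl << 'EOF'
{"question": "What is the capital of France?", "response": "Paris."}
{"question": "Name a prime number greater than 10.", "response": "11."}
EOF
cat > data/test.jsonl << 'EOF'
{"question": "What is 2 + 2?", "response": "4."}
EOF
# Then point the trainer at it, e.g.:
#   --dataset json@./data --input_key question --output_key response
```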
{ "source": "hkust-nlp/simpleRL-reason", "title": "train/README.md", "url": "https://github.com/hkust-nlp/simpleRL-reason/blob/main/train/README.md", "date": "2025-01-25T07:16:58", "stars": 2958, "description": "This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data", "file_size": 18640 }
<div align="center">
    <p align="center">
        <img alt="" src="./docs/logo.png" style="display: inline-block; height: 140px" />
    </p>
</div>

<div align="center">
<p align="center">
      <a href="https://github.com/OpenRLHF/OpenRLHF/graphs/contributors">
        <img alt="GitHub Contributors" src="https://img.shields.io/github/contributors/OpenRLHF/OpenRLHF" />
      </a>
      <a href="https://github.com/OpenRLHF/OpenRLHF/issues">
        <img alt="Issues" src="https://img.shields.io/github/issues/OpenRLHF/OpenRLHF?color=0088ff" />
      </a>
      <a href="https://github.com/OpenRLHF/OpenRLHF/discussions">
        <img alt="Issues" src="https://img.shields.io/github/discussions/OpenRLHF/OpenRLHF?color=0088ff" />
      </a>
      <a href="https://github.com/OpenRLHF/OpenRLHF/pulls">
        <img alt="GitHub pull requests" src="https://img.shields.io/github/issues-pr/OpenRLHF/OpenRLHF?color=0088ff" />
      <a href="https://github.com/OpenRLHF/OpenRLHF/stargazers">
        <img alt="GitHub stars" src="https://img.shields.io/github/stars/OpenRLHF/OpenRLHF?color=ccf" />
      </a>
      <br>
      <em>Open-source / Comprehensive / Lightweight / Easy-to-use</em>
    </p>
</p>
</div>

<hr>

<span>[ <a href="README.md">English</a> | Chinese ]</span>

OpenRLHF is a high-performance RLHF framework built on Ray, DeepSpeed and HF Transformers:

- **Simple and easy to use**: OpenRLHF is one of the simplest high-performance RLHF libraries currently available, and is seamlessly compatible with Huggingface models and datasets.
- **High performance**: RLHF training spends 80% of the time on the sample generation stage. Thanks to the large inference batch sizes enabled by Ray, Packing Samples, and vLLM generation acceleration, the performance of OpenRLHF is more than 3~4x that of the fully optimized DeepSpeedChat with Hybrid Engine.
- **Distributed RLHF**: OpenRLHF uses Ray to distribute the Actor, Reward, Reference, and Critic models onto separate GPUs, while placing the Adam optimizer on the CPU. This enables full-scale fine-tuning of 70B+ models with multiple A100 80G GPUs and vLLM, as well as fine-tuning 7B models across multiple 24GB RTX 4090 GPUs.
- **PPO implementation tricks**: We integrated the implementation tricks for PPO to improve training stability, referencing [Zhihu](https://zhuanlan.zhihu.com/p/622134699) and the [Notion blog](https://difficult-link-dd7.notion.site/eb7b2d1891f44b3a84e7396d19d39e6f?v=01bcb084210149488d730064cbabc99f).

More details are in the [Slides](https://docs.google.com/presentation/d/1JRhB1d7csofx0PIZBmfyBdMluxNd5JLPpUHrrvVhGnk/edit?usp=sharing) | [Technical Report](https://arxiv.org/abs/2405.11143) | [Documents](https://openrlhf.readthedocs.io/)

## Features

- Distributed [PPO](./examples/scripts/train_ppo_llama_ray.sh)/[Reinforce](./examples/scripts/train_reinforce_llama_ray.sh) based on Ray.
- Support full RLHF fine-tuning of [models with over 70 billion parameters](./examples/scripts/train_ppo_llama_ray_70b.sh).
- Support vLLM generation acceleration in RLHF (--vllm_num_engines).
- Support multiple reward models (--reward_pretrain model1,model2...) and remote reward models (--remote_rm_url).
- Support [DPO (direct preference optimization)/IPO/cDPO](./examples/scripts/train_dpo_llama.sh).
- Support [Kahneman-Tversky optimization (KTO)](./examples/scripts/train_kto_llama.sh).
- Support [Rejection Sampling](./examples/scripts/train_rejection_sampling_llama.sh).
- Support [Iterative DPO](./examples/scripts/train_iterative_dpo_llama.sh) (https://github.com/RLHFlow/Online-RLHF).
- Support [Conditional SFT](./examples/scripts/train_conditional_llama.sh) (https://arxiv.org/abs/2308.12050).
- Support [Knowledge Distillation](./examples/scripts/train_knowledge_distillation.sh) (https://github.com/microsoft/LMOps/tree/main/minillm).
- Support [Process Reward Model (PRM)](./examples/scripts/train_prm_mistral.sh).
- Support SFT/DPO/RM/PRM/PPO training samples packing (--packing_samples).
- Support [RingAttention](./examples/scripts/train_dpo_ring_llama.sh) (--ring_attn_size, --ring_head_stride)
- Support [MoE](./examples/test_scripts/train_sft_mixtral_lora.sh) (--aux_loss_coef)
- Support FlashAttention2 (--flash_attn).
- Support QLoRA (--load_in_4bit), [LoRA (--lora_rank, --target_modules)](./examples/scripts/train_sft_mixtral_lora.sh).
- Support HuggingFace `tokenizer.apply_chat_template` for dataset processing (--apply_chat_template and --input_key).
- Support Wandb logging (--use_wandb) and tensorboard (--use_tensorboard).
- Support recovering from checkpoints (--load_checkpoint and --save_steps).
- Multi-node [training scripts](./examples/scripts/train_llama_slurm.sh) for Slurm.

### PPO Support Matrix

| Feature | OpenRLHF | DSChat | CAIChat | TRL |
| ------------- |:-------------:| :-------------:| :-------------:| :-------------:|
| 70B+ Full Tuning with 16 A100-80GB | ✅ | ❌ | ❌ | ❌ |
| 7B Full Tuning with 4 RTX4090 | ✅ | ❌ | ❌ | ❌ |
| 34B DPO Full Tuning with 8 A100-80GB | ✅ | ❌ | ❌ | ❌ |
| Inference Engine in PPO | ✅ | ✅ | ❌ | ❌ |
| PPO Implementation Tricks | ✅ | ❌ | ❌ | ✅ |
| Support QLoRA | ✅ | ❌ | ❌ | ✅ |
| Support Mixtral 8*7b | ✅ | ❌ | ❌ | ❌ |
| Support Unmerged Actor-Critic | ✅ | ✅ | ✅ | ❌ |
| Support Multiple Reward Models | ✅ | ❌ | ❌ | ❌ |
| Support Huggingface Models | ✅ | ✅ | ✅ | ✅ |
| Easy-to-use | ✅ | ❌ (HybridEngine bugs) | ✅ | ✅ |

## Quick Start

### Installation

To use OpenRLHF, first launch the Docker container (**recommended**) and then `pip install` openrlhf inside it:

```bash
# Launch the docker container
docker run --runtime=nvidia -it --rm --shm-size="10g" --cap-add=SYS_ADMIN -v $PWD:/openrlhf nvcr.io/nvidia/pytorch:24.02-py3 bash
sudo pip uninstall xgboost transformer_engine flash_attn -y

# pip install
pip install openrlhf

# If you want to use vLLM acceleration (to install vLLM 0.4.2)
pip install openrlhf[vllm]
# The latest vLLM is also supported (please use `--vllm_sync_backend gloo` or `export NCCL_P2P_DISABLE=1`)
pip install openrlhf[vllm_latest]

# pip install the latest version from GitHub
pip install git+https://github.com/OpenRLHF/OpenRLHF.git

# Or git clone
git clone https://github.com/OpenRLHF/OpenRLHF.git
cd OpenRLHF
pip install -e .
```

> [!NOTE]
>We recommend using vLLM 0.4.2, as the 0.4.3+ versions currently require synchronizing weights via Gloo (`--vllm_sync_backend gloo`) or disabling P2P communication (`export NCCL_P2P_DISABLE=1`).
>We also provide the [Dockerfiles for vLLM](./dockerfile/) and the [One-Click Installation Script of Nvidia-Docker](./examples/scripts/nvidia_docker_install.sh).

### Prepare Datasets

OpenRLHF provides multiple data processing methods in its dataset classes.
For example, in the [Prompt Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/prompts_dataset.py#L6):

```python
def preprocess_data(data, input_template=None, input_key="input", apply_chat_template=None) -> str:
    if apply_chat_template:
        prompt = apply_chat_template(data[input_key], tokenize=False, add_generation_prompt=True)
    else:
        prompt = data[input_key]
        if input_template:
            prompt = input_template.format(prompt)
    return prompt
```

- We can use `--input_key` to specify the `JSON key name` of the input datasets `--prompt_data {name or path}` (PPO) or `--dataset {name or path}`, and use `--apply_chat_template` to utilize the `chat_template` from the [Huggingface Tokenizer](https://huggingface.co/docs/transformers/main/en/chat_templating).
- If you don't want to use `--apply_chat_template`, you can use `--input_template` instead, or preprocess the datasets offline in advance.
- OpenRLHF also supports mixing multiple datasets using `--prompt_data_probs 0.1,0.4,0.5` (PPO) or `--dataset_probs 0.1,0.4,0.5`.

How chat templating works:

```python
dataset = [{"input_key": [
  {"role": "user", "content": "Hello, how are you?"},
  {"role": "assistant", "content": "I'm doing great. How can I help you today?"},
  {"role": "user", "content": "I'd like to show off how chat templating works!"},
]}]

tokenizer.apply_chat_template(dataset[0]["input_key"], tokenize=False)

"<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```

How to specify training and test dataset partitions?

You can specify them using the `data_type@data_dir` format. For example, the dataset below can be set as `--dataset json@./data`.

```
data
├── test.jsonl
└── train.jsonl
```

> [!NOTE]
> By default, we use `train` and `test` as splits to distinguish Huggingface training and testing datasets.
> The `JSON key` options depend on the specific dataset. See [Reward Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/reward_dataset.py#L10) and [SFT Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/sft_dataset.py#L9)

### Supervised Fine-tuning

OpenRLHF's model checkpoints are fully compatible with HuggingFace models. You can specify the model name or path using `--pretrain {name or path}`, `--reward_pretrain {name or path}` and `--critic_pretrain {name or path}`. We have provided some pre-trained checkpoints and datasets on [HuggingFace OpenRLHF](https://huggingface.co/OpenRLHF).

Then you can use the startup scripts we provide in the [examples/scripts](./examples/scripts/) directory, or start the training using the following commands:

```bash
deepspeed --module openrlhf.cli.train_sft \
   --max_len 4096 \
   --dataset Open-Orca/OpenOrca \
   --input_key question \
   --output_key response \
   --input_template 'User: {}\nAssistant: ' \
   --train_batch_size 256 \
   --micro_train_batch_size 2 \
   --max_samples 500000 \
   --pretrain meta-llama/Meta-Llama-3-8B \
   --save_path ./checkpoint/llama3-8b-sft \
   --save_steps -1 \
   --logging_steps 1 \
   --eval_steps -1 \
   --zero_stage 2 \
   --max_epochs 1 \
   --bf16 \
   --flash_attn \
   --learning_rate 5e-6 \
   --gradient_checkpointing \
   --packing_samples \
   --load_checkpoint \
   --use_wandb {wandb_token}

# Support HF tokenizer.apply_chat_template
# --apply_chat_template
# --input_key {JSON Key}
# --tokenizer_chat_template {HF Chat Template}

# Can also be used for continued pre-training
# --pretrain_mode
```

> [!NOTE]
> OpenRLHF SFT/DPO/RewardModel/PPO trainers support `--packing_samples` [based on `--flash_attn`](https://github.com/MeetKai/functionary/tree/main/functionary/train/packing)

### Reward Model Training

```bash
deepspeed --module openrlhf.cli.train_rm \
   --save_path ./checkpoint/llama3-8b-rm \
   --save_steps -1 \
   --logging_steps 1 \
   --eval_steps -1 \
   --train_batch_size 256 \
   --micro_train_batch_size 1 \
   --pretrain OpenRLHF/Llama-3-8b-sft-mixture \
   --bf16 \
   --max_epochs 1 \
   --max_len 8192 \
   --zero_stage 3 \
   --learning_rate 9e-6 \
   --dataset OpenRLHF/preference_dataset_mixture2_and_safe_pku \
   --apply_chat_template \
   --chosen_key chosen \
   --rejected_key rejected \
   --flash_attn \
   --packing_samples \
   --gradient_checkpointing \
   --load_checkpoint \
   --use_wandb {wandb_token}
```

### PPO without Ray

```bash
deepspeed --module openrlhf.cli.train_ppo \
  --pretrain OpenRLHF/Llama-3-8b-sft-mixture \
  --reward_pretrain OpenRLHF/Llama-3-8b-rm-mixture \
  --save_path ./checkpoint/llama-3-8b-rlhf \
  --save_steps -1 \
  --logging_steps 1 \
  --eval_steps -1 \
  --micro_train_batch_size 2 \
  --train_batch_size 128 \
  --micro_rollout_batch_size 4 \
  --rollout_batch_size 1024 \
  --max_epochs 1 \
  --prompt_max_len 1024 \
  --generate_max_len 1024 \
  --zero_stage 2 \
  --bf16 \
  --actor_learning_rate 5e-7 \
  --critic_learning_rate 9e-6 \
  --init_kl_coef 0.01 \
  --prompt_data OpenRLHF/prompt-collection-v0.1 \
  --input_key context_messages \
  --apply_chat_template \
  --max_samples 100000 \
  --normalize_reward \
  --adam_offload \
  --flash_attn \
  --gradient_checkpointing \
  --load_checkpoint \
  --use_wandb {wandb_token}

# Support remote reward model (HTTP)
# --remote_rm_url http://localhost:5000/get_reward
```

### PPO with Ray and vLLM

To improve RLHF training speed or support 70B models, we can use PPO with Ray and vLLM acceleration

```bash
# launch the master node of ray in container
ray start --head --node-ip-address 0.0.0.0 --num-gpus 8

# if you want to launch ray on more nodes, use
ray start --address {MASTER-NODE-ADDRESS}:6379 --num-gpus 8

ray job submit --address="http://127.0.0.1:8265" \
   --runtime-env-json='{"working_dir": "/openrlhf"}' \
   -- python3 -m openrlhf.cli.train_ppo_ray \
   --ref_num_nodes 1 \
   --ref_num_gpus_per_node 2 \
   --reward_num_nodes 1 \
   --reward_num_gpus_per_node 2 \
   --critic_num_nodes 1 \
   --critic_num_gpus_per_node 2 \
   --actor_num_nodes 1 \
   --actor_num_gpus_per_node 2 \
   --vllm_num_engines 2 \
   --vllm_tensor_parallel_size 2 \
   --colocate_critic_reward \
   --colocate_actor_ref \
   --pretrain OpenRLHF/Llama-3-8b-sft-mixture \
   --reward_pretrain OpenRLHF/Llama-3-8b-rm-mixture \
   --save_path /openrlhf/examples/checkpoint/llama3-8b-rlhf \
   --micro_train_batch_size 8 \
   --train_batch_size 128 \
   --micro_rollout_batch_size 32 \
   --rollout_batch_size 1024 \
   --max_samples 100000 \
   --max_epochs 1 \
   --prompt_max_len 1024 \
   --generate_max_len 1024 \
   --zero_stage 3 \
   --bf16 \
   --actor_learning_rate 5e-7 \
   --critic_learning_rate 9e-6 \
   --init_kl_coef 0.01 \
   --prompt_data OpenRLHF/prompt-collection-v0.1 \
   --input_key context_messages \
   --apply_chat_template \
   --normalize_reward \
   --packing_samples \
   --adam_offload \
   --flash_attn \
   --gradient_checkpointing \
   --load_checkpoint \
   --use_wandb {wandb_token}

# Support remote reward model (HTTP)
# --remote_rm_url http://localhost:5000/get_reward
```

> [!NOTE]
> Not setting `--vllm_num_engines` means the vLLM engine is not used.
> You can also use ``setup_commands`` to let Ray automatically deploy the environment, e.g. `--runtime-env-json='{"setup_commands": ["pip install openrlhf[vllm]"]}'`

The launch scripts and documents for supported algorithms are in [example/scripts](./examples/scripts/) and [Documents - Usage](https://openrlhf.readthedocs.io/en/latest/usage.html)

## Performance

We optimized DSChat's performance to the greatest extent possible by employing techniques such as enabling Adam offload, along with reward model (RM) and reference model (Ref) offload, to increase the micro-batch size during the inference stage and avoid out-of-memory issues. We even fixed some bugs in DSChat to enable the Hybrid Engine (HE) for LLaMA2. The average time (seconds) it took to train 1024 prompts with 1 PPO epoch using the Optimized DSChat and OpenRLHF:

| **Size** | **NVIDIA A800 GPUs** | **Optimized DSChat (with Hybrid Engine)** | **OpenRLHF** | **Speedup** |
| :---: | :---: | :---: | :---: | :---: |
| 7B | 16 | 855.09 | 471.11 | 1.82x |
| 13B | 32 | 1528.93 | 608.93 | 2.5x |
| 34B | 32 | 3634.98 | 1526.4 | 2.4x |
| 70B | 32 | 10407.0 | 4488.53 | 2.3x |

> [!NOTE]
> The data is outdated; please refer to the tuning guide below for re-testing.

## Performance Tuning Guide

To achieve optimal performance, we recommend allocating more nodes to the vLLM Engine. For example, for a 70B model with 32 A100 GPUs, it is advised to allocate more than 16 A100 GPUs to the vLLM Engine, 8 GPUs to the Actor model, and the remaining 8 GPUs to the Critic model, and to enable the `--colocate_critic_reward`, `--colocate_actor_ref` or `--ref_reward_offload` (optional) options to merge some nodes. Finally, you should increase `--rollout_micro_batch_size` as much as possible and minimize the TP size of the vLLM engine. During the training phase, a larger `--micro_train_batch_size` is better; please also use `--packing_samples`. Enable `enable_prefix_caching` in vLLM when `n_samples_per_prompt > 1`. When there are enough GPUs, please disable `--adam_offload`.

## Companies and Organizations using OpenRLHF

- ByteDance
- NexusFlow
- Baidu
- Jülich Supercomputing Centre (JSC)
- Berkeley Starling Team
- Tencent
- Alibaba
- Google
- China Telecom
- ...

## Join Us

**How to Join?**

1. Email us at [email protected] or join the [GitHub Organization](https://github.com/OpenRLHF). Please include the following details:
   - Your name
   - Your GitHub username
   - Your areas of interest
   - Your skills and experience related to NLP and/or AI
2. You can also join us through the official GitHub [OpenRLHF ↗](https://github.com/OpenRLHF/OpenRLHF) project page. Just create an issue about your interest to contribute and we will get back to you.

**What can you do?**

1. Join the team and participate in the development of the OpenRLHF project.
2. Contribute to the project by submitting pull requests.
3. Help improve documentation, fix bugs, or create new features.
4. Share the project and help us grow the community.

## Sponsor Us

Your sponsorship can help us maintain and improve OpenRLHF. If you find this project useful, please consider sponsoring us. You can sponsor us on [Open Collective ↗](https://opencollective.com/OpenRLHF).

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=OpenRLHF/OpenRLHF&type=Date)](https://star-history.com/#OpenRLHF/OpenRLHF&Date)

## Contributors

A big thank you to all our contributors! If you want to contribute, feel free to make a pull request or create an issue.

<a href="https://github.com/OpenRLHF/OpenRLHF/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=OpenRLHF/OpenRLHF" />
</a>

## References & Acknowledgements

We would like to express our gratitude to the following projects and organizations for their contributions to the field of AI and NLP:

- [Hugging Face Transformers ↗](https://github.com/huggingface/transformers)
- [OpenAI GPT ↗](https://github.com/openai/gpt-3)
- [LLaMA ↗](https://llama.meta.com/)
- [DeepSpeed ↗](https://github.com/microsoft/DeepSpeed)
- [Ray ↗](https://github.com/ray-project/ray)

Our project would also like to thank [ColossalChat](https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat) and [DeepSpeedChat](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat). In the early stages of the project, we referred to their code design.

(2024/7) Our GitHub organization has changed from OpenLLMAI to OpenRLHF.

## Citation

```
@article{hu2024openrlhf,
  title={OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework},
  author={Jian Hu and Xibin Wu and Weixun Wang and Xianyu and Dehao Zhang and Yu Cao},
  journal={arXiv preprint arXiv:2405.11143},
  year={2024}
}
```

______________________________________________________________________

*OpenRLHF © 2024 OpenRLHF. All Rights Reserved.*
{ "source": "hkust-nlp/simpleRL-reason", "title": "train/README_zh.md", "url": "https://github.com/hkust-nlp/simpleRL-reason/blob/main/train/README_zh.md", "date": "2025-01-25T07:16:58", "stars": 2958, "description": "This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data", "file_size": 15030 }
![Logo](https://picgo-1258602555.cos.ap-nanjing.myqcloud.com/icon.png)

# [latex2sympy2](https://github.com/OrangeX4/latex2sympy)

## About

`latex2sympy2` parses **LaTeX math expressions** and converts them into the equivalent **SymPy form**. `latex2sympy2` is adapted from [augustt198/latex2sympy](https://github.com/augustt198/latex2sympy) and [purdue-tlt/latex2sympy](https://github.com/purdue-tlt/latex2sympy).

This project is part of a VS Code extension called [Latex Sympy Calculator](https://marketplace.visualstudio.com/items?itemName=OrangeX4.latex-sympy-calculator). It is designed to give people writing in LaTeX or Markdown the ability to perform calculations while writing math expressions.

[ANTLR](http://www.antlr.org/) is used to generate the parser.

## Features

* **Arithmetic:** Add (+), Sub (-), Dot Mul (·), Cross Mul (×), Frac (/), Power (^), Abs (|x|), Sqrt (√), etc...
* **Alphabet:** a - z, A - Z, α - ω, Subscript (x_1), Accent Bar (ā), etc...
* **Common Functions:** gcd, lcm, floor, ceil, max, min, log, ln, exp, sin, cos, tan, csc, sec, cot, arcsin, sinh, arsinh, etc...
* **Function Symbols:** f(x), f(x-1,), g(x,y), etc...
* **Calculus:** Limit ($lim_{n\to\infty}$), Derivative ($\frac{d}{dx}(x^2+x)$), Integration ($\int xdx$), etc...
* **Linear Algebra:** Matrix, Determinant, Transpose, Inverse, Elementary Transformation, etc...
* **Other:** Binomial...

**NOTICE:** It will do some irreversible calculations when converting determinants, transposed matrices and elementary transformations...

## Installation

```
pip install latex2sympy2
```

**Requirements:** `sympy` and `antlr4-python3-runtime` packages.

## Usage

### Basic

In Python:

```python
from latex2sympy2 import latex2sympy, latex2latex

tex = r"\frac{d}{dx}(x^{2}+x)"
# Or you can use '\mathrm{d}' to replace 'd'
latex2sympy(tex)
# => "Derivative(x**2 + x, x)"
latex2latex(tex)
# => "2 x + 1"
```

### Examples

|LaTeX|Converted SymPy|Calculated Latex|
|-----|-----|---------------|
|`x^{3}` $x^{3}$| `x**3`|`x^{3}` $x^{3}$|
|`\frac{d}{dx} tx` $\frac{d}{dx}tx$|`Derivative(x*t, x)`|`t` $t$|
|`\sum_{i = 1}^{n} i` $\sum_{i = 1}^{n} i$|`Sum(i, (i, 1, n))`|`\frac{n \left(n + 1\right)}{2}` $\frac{n \left(n + 1\right)}{2}$|
|`\int_{a}^{b} \frac{dt}{t}`|`Integral(1/t, (t, a, b))`|`-\log{(a)} + \log{(b)}` $-\log{(a)} + \log{(b)}$|
|`(2x^3 - x + z)|_{x=3}` $(2x^3 - x + z)\|_{x=3}$|`z + 51`| `z + 51` $z + 51$ |

If you want to read the math formula, you can click [GitNotes](https://notes.orangex4.cool/?git=github&github=OrangeX4/latex2sympy).
### Solve Equation ``` latex # Before x + y = 1 # After [ y = 1 - x, \ x = 1 - y] ``` ### Eval At ``` latex # Before (x+2)|_{x=y+1} # After y + 3 ``` ### Matrix #### Identity matrix ``` tex = r"\bm{I}_3" latex2sympy(tex) # => "Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]])" ``` #### Determinant ``` python from latex2sympy2 import latex2sympy tex = r"\begin{vmatrix} x & 0 & 0 \\ 0 & x & 0 \\ 0 & 0 & x \end{vmatrix}" latex2sympy(tex) # => "x^{3}" ``` #### Transpose ``` python from latex2sympy2 import latex2sympy tex = r"\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}^T" # Or you can use "\begin{pmatrix}1&2&3\\4&5&6\\7&8&9\end{pmatrix}'" latex2sympy(tex) # => "Matrix([[1, 4, 7], [2, 5, 8], [3, 6, 9]])" ``` #### Elementary Transformation ``` python from latex2sympy2 import latex2sympy matrix = r''' \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{pmatrix} ''' # Scale the row with grammar "\xrightarrow{kr_n}" tex = matrix + r'\xrightarrow{3r_1}' latex2sympy(tex) # => "Matrix([[3, 6, 9], [4, 5, 6], [7, 8, 9]])" # Swap the cols with grammar "\xrightarrow{c_1<=>c_2}" # Of course, you can use "\leftrightarrow" to replace "<=>" tex = matrix + r'\xrightarrow{c_1<=>c_2}' latex2sympy(tex) # => "Matrix([[2, 1, 3], [5, 4, 6], [8, 7, 9]])" # Scale the second row and add it to the first row # with grammar "\xrightarrow{r_1+kr_2}" tex = matrix + r'\xrightarrow{r_1+kr_2}' latex2sympy(tex) # => "Matrix([[4*k + 1, 5*k + 2, 6*k + 3], [4, 5, 6], [7, 8, 9]])" # You can compose the transform with comma "," # and grammar "\xrightarrow[4r_3]{2r_1, 3r_2}" # Remember the priority of "{}" is higher than "[]" tex = matrix + r'\xrightarrow[4r_3]{2r_1, 3r_2}' latex2sympy(tex) # => "Matrix([[2, 4, 6], [12, 15, 18], [28, 32, 36]])" ``` ### Variances ``` python from latex2sympy2 import latex2sympy, variances, var, set_variances # Assign x a value of 1 latex2sympy(r"x = 1") # Assign x a matrix symbol with dimension of n x m latex2sympy(r"x \in \mathbb{R}^{n \times m}") # Calculate x + y latex2sympy(r"x + y") # => "y + 1" # Get all variances print(variances) # => "{x: 1}" # Get variance of "x" print(var["x"]) # => "1" # Reset all variances set_variances({}) latex2sympy(r"x + y") # => "x + y" ``` ### Complex Number Support ``` python from latex2sympy2 import set_real set_real(False) ``` ## Contributing If you want to add a new grammar, you can fork the code from [OrangeX4/latex2sympy](https://github.com/OrangeX4/latex2sympy). * To modify parser grammar, view the existing structure in `PS.g4`. * To modify the action associated with each grammar, look into `latex2sympy.py`. Contributors are welcome! Feel free to open a pull request or an issue.
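### Working with the SymPy Result

Since `latex2sympy` returns ordinary SymPy objects, you can keep manipulating the result with SymPy itself. A small sketch (assuming `sympy` is installed alongside `latex2sympy2`, as required above):

```python
import sympy
from latex2sympy2 import latex2sympy

expr = latex2sympy(r"\frac{d}{dx}(x^{2}+x)")  # -> Derivative(x**2 + x, x)
evaluated = expr.doit()                       # carry out the derivative with SymPy
print(evaluated)                              # 2*x + 1
print(sympy.latex(evaluated))                 # back to LaTeX: 2 x + 1
```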
{ "source": "hkust-nlp/simpleRL-reason", "title": "eval/latex2sympy/README.md", "url": "https://github.com/hkust-nlp/simpleRL-reason/blob/main/eval/latex2sympy/README.md", "date": "2025-01-25T07:16:58", "stars": 2958, "description": "This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data", "file_size": 5301 }
### Training settings * Base model: LLaMA2 7B * SFT dataset: Ocra 500k * RM datasets: Anthropic HH, LMSys, OASST, WebGPT ~ 240K+ * PPO prompts: alpaca, Anthropic HH, OASST ~ 100K ### Question: Please introduce the game GTA5 **SFT** Human: Please introduce the game GTA5 Assistant: GTA5, or Grand Theft Auto V, is an action-adventure video game developed by Rockstar North and published by Rockstar Games. It was released in 2013 for the PlayStation 3 and Xbox 360, and later in 2014 for the PlayStation 4 and Xbox One. The game is set in the fictional city of Los Santos, based in Los Angeles, and follows the story of three criminals as they plan and execute heists. GTA5 is an open-world game, meaning that players can explore the game's vast and detailed world at their leisure, completing missions, engaging in side activities, and interacting with other characters. The game features an extensive array of vehicles, from cars to boats to planes, and players can use them to travel around the city and engage in various forms of criminal activity. GTA5 is known for its immersive gameplay, stunning graphics, and rich storytelling. It has received critical acclaim and has become one of the best-selling video games of all time, with over 150 million copies sold worldwide. **PPO** Human: Please introduce the game GTA5 Assistant: 1. **Game Name:** Grand Theft Auto V (GTA V) 2. **Platforms:** GTA V is available on PlayStation 3 and Xbox 360, and later released on PlayStation 4, Xbox One, and Microsoft Windows. 3. **Genre:** Action-adventure game 4. **Gameplay:** GTA V is an open-world game where players can explore the fictional city of Los Santos and its surrounding areas, completing missions, engaging in side activities, and interacting with non-player characters. The game features a third-person perspective and allows players to freely roam the game world, engage in combat, and complete objectives. 5. **Storyline:** The game follows the story of three protagonists, Michael De Santa, Franklin Clinton, and Trevor Philips, as they become embroiled in a series of heists and other criminal activities. The game's storyline is considered one of the strongest aspects of GTA 6. **Graphics and Sound:** GTA V features stunning graphics and realistic character animations, making the game world feel alive and immersive. The soundtrack and voice acting are also highly praised, with a diverse and engaging musical score that complements the game's action-packed gameplay. 7. **Reception:** GTA V received critical acclaim upon its release, with many praising its storyline, gameplay, graphics, and sound. It is considered one of the best video games of all time and has sold over 150 million copies worldwide, making it one of the best-selling video games of all time. In summary, GTA V is an action-adventure game set in a massive open world, with a complex and engaging storyline, stunning graphics, and highly praised soundtrack and voice acting. It is considered one of the greatest video games of all time and continues to be a popular choice among gamers worldwide. ### Question: Which GPU is better, AMD or NVIDIA? **SFT** Human: Which GPU is better, AMD or NVIDIA? Assistant: The performance and suitability of a GPU (Graphics Processing Unit) depend on various factors, including the specific task or application being used, the model and generation of the GPU, and the system it is being used in. Both AMD and NVIDIA are well-known and reputable manufacturers of GPUs, and they both have a wide range of models and generations. 
In general, the performance of a GPU can vary depending on the specific task or application being used. For example, AMD's GPUs are known for their performance in gaming and video editing, while NVIDIA's GPUs are known for their performance in machine learning and deep learning. It is important to note that the performance of a GPU can also vary depending on the generation of the GPU. Newer generations of GPUs tend to have better performance and features than older generations. Ultimately, the best GPU for you will depend on your specific needs and the tasks you plan to use it for. It is recommended to research and compare the specific models and generations of AMD and NVIDIA GPUs to determine which one is the best fit for your needs. **PPO** Human: Which GPU is better, AMD or NVIDIA? Assistant: 1. **Performance:** In terms of raw performance, NVIDIA typically outperforms AMD in most cases, especially in gaming and video editing. 2. **Price:** NVIDIA GPUs tend to be more expensive than AMD GPUs, although this can vary depending on the specific model and its features. 3. **Architecture:** NVIDIA's GPU architecture is based on the CUDA (Compute Unified Device Architecture) platform, which allows for more efficient processing of parallel computing tasks. AMD's GPU architecture, on the other hand, is based on the Stream Compute architecture, which is designed for general-purpose computing. 4. **Application Support:** NVIDIA's GPUs have better support for certain applications, particularly in the fields of deep learning and artificial intelligence. AMD's GPUs, however, have better support for certain games and video editing software. In summary, NVIDIA's GPUs tend to be more powerful and efficient for certain tasks, while AMD's GPUs are better suited for other tasks and have better support for certain applications. Ultimately, the choice between the two depends on the specific needs and preferences of the user.
{ "source": "hkust-nlp/simpleRL-reason", "title": "train/docs/ppo_examples.md", "url": "https://github.com/hkust-nlp/simpleRL-reason/blob/main/train/docs/ppo_examples.md", "date": "2025-01-25T07:16:58", "stars": 2958, "description": "This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data", "file_size": 5523 }
MIT OpenCourseWare:

- Solving Quantitative Reasoning Problems with Language Models. https://openreview.net/forum?id=IFXTZERXdM7
{ "source": "hkust-nlp/simpleRL-reason", "title": "eval/data/minerva_math/README.md", "url": "https://github.com/hkust-nlp/simpleRL-reason/blob/main/eval/data/minerva_math/README.md", "date": "2025-01-25T07:16:58", "stars": 2958, "description": "This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data", "file_size": 131 }
# Code of Conduct - Fast GraphRAG ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. ## Our Standards Examples of behaviour that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologising to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behaviour include: * The use of sexualised language or imagery, and sexual attention or advances * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying and enforcing our standards of acceptable behaviour and will take appropriate and fair corrective action in response to any instances of unacceptable behaviour. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviours that they deem inappropriate, threatening, offensive, or harmful. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behaviour may be reported to the community leaders responsible for enforcement at . All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant](https://contributor-covenant.org/), version [1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct/code_of_conduct.md) and [2.0](https://www.contributor-covenant.org/version/2/0/code_of_conduct/code_of_conduct.md), and was generated by [contributing-gen](https://github.com/bttger/contributing-gen).
{ "source": "circlemind-ai/fast-graphrag", "title": "CODE_OF_CONDUCT.md", "url": "https://github.com/circlemind-ai/fast-graphrag/blob/main/CODE_OF_CONDUCT.md", "date": "2024-10-30T18:05:08", "stars": 2954, "description": "RAG that intelligently adapts to your use case, data, and queries", "file_size": 3107 }
<!-- omit in toc --> # Contributing to Fast GraphRAG First off, thanks for taking the time to contribute! ❤️ All types of contributions are encouraged and valued. See the [Table of Contents](#table-of-contents) for different ways to help and details about how this project handles them. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all involved. The community looks forward to your contributions. 🎉 > And if you like the project, but just don't have time to contribute, that's fine. There are other easy ways to support the project and show your appreciation, which we would also be very happy about: > - Star the project > - Tweet about it > - Refer this project in your project's readme > - Mention the project at local meetups and tell your friends/colleagues <!-- omit in toc --> ## Table of Contents - [Code of Conduct](#code-of-conduct) - [I Have a Question](#i-have-a-question) - [I Want To Contribute](#i-want-to-contribute) - [Reporting Bugs](#reporting-bugs) - [Suggesting Enhancements](#suggesting-enhancements) - [Your First Code Contribution](#your-first-code-contribution) - [Improving The Documentation](#improving-the-documentation) - [Styleguides](#styleguides) - [Commit Messages](#commit-messages) - [Join The Project Team](#join-the-project-team) ## Code of Conduct This project and everyone participating in it is governed by the [Fast GraphRAG Code of Conduct](https://github.com/circlemind-ai/fast-graphrag/blob/main/CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to . ## I Have a Question First off, make sure to join the discord community: https://discord.gg/McpuSEkR Before you ask a question, it is best to search for existing [Issues](https://github.com/circlemind-ai/fast-graphrag/issues) that might help you. In case you have found a suitable issue and still need clarification, you can write your question in this issue. It is also advisable to search the internet for answers first. If you then still feel the need to ask a question and need clarification, we recommend the following: - Open an [Issue](https://github.com/circlemind-ai/fast-graphrag/issues/new). - Provide as much context as you can about what you're running into. - Provide project and platform versions (python, os, etc), depending on what seems relevant. We will then take care of the issue as soon as possible. ## I Want To Contribute > ### Legal Notice <!-- omit in toc --> > When contributing to this project, you must agree that you have authored 100% of the content, that you have the necessary rights to the content and that the content you contribute may be provided under the project licence. ### Reporting Bugs <!-- omit in toc --> #### Before Submitting a Bug Report A good bug report shouldn't leave others needing to chase you up for more information. Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report. Please complete the following steps in advance to help us fix any potential bug as fast as possible. - Make sure that you are using the latest version. - Determine if your bug is really a bug and not an error on your side e.g. using incompatible environment components/versions. If you are looking for support, you might want to check Discord first. 
- To see if other users have experienced (and potentially already solved) the same issue you are having, check if there is not already a bug report existing for your bug or error in the [bug tracker](https://github.com/circlemind-ai/fast-graphrag/issues?q=label%3Abug). - Also make sure to search the internet (including Stack Overflow) to see if users outside of the GitHub community have discussed the issue. - Collect all important information about the bug <!-- omit in toc --> #### How Do I Submit a Good Bug Report? > You must never report security related issues, vulnerabilities or bugs including sensitive information to the issue tracker, or elsewhere in public. Instead sensitive bugs must be sent by email to [email protected] We use GitHub issues to track bugs and errors. If you run into an issue with the project: - Open an [Issue](https://github.com/circlemind-ai/fast-graphrag/issues/new). (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.) - Explain the behavior you would expect and the actual behavior. - Please provide as much context as possible and describe the *reproduction steps* that someone else can follow to recreate the issue on their own. This usually includes your code. For good bug reports you should isolate the problem and create a reduced test case. - Provide the information you collected in the previous section. Once it's filed: - The project team will label the issue accordingly. - A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps and mark the issue as `needs-repro`. Bugs with the `needs-repro` tag will not be addressed until they are reproduced. - If the team is able to reproduce the issue, it will be marked `needs-fix`, as well as possibly other tags (such as `critical`), and the issue will be left to be [implemented by someone](#your-first-code-contribution). <!-- You might want to create an issue template for bugs and errors that can be used as a guide and that defines the structure of the information to be included. If you do so, reference it here in the description. --> ### Suggesting Enhancements This section guides you through submitting an enhancement suggestion for Fast GraphRAG, **including completely new features and minor improvements to existing functionality**. Following these guidelines will help maintainers and the community to understand your suggestion and find related suggestions. <!-- omit in toc --> #### Before Submitting an Enhancement - Make sure that you are using the latest version. - Perform a [search](https://github.com/circlemind-ai/fast-graphrag/issues) to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one. - Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to convince the project's developers of the merits of this feature. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, consider writing an add-on/plugin library. <!-- omit in toc --> #### How Do I Submit a Good Enhancement Suggestion? Enhancement suggestions are tracked as [GitHub issues](https://github.com/circlemind-ai/fast-graphrag/issues). - Use a **clear and descriptive title** for the issue to identify the suggestion. 
- Provide a **step-by-step description of the suggested enhancement** in as many details as possible. - **Describe the current behavior** and **explain which behavior you expected to see instead** and why. At this point you can also tell which alternatives do not work for you. - **Explain why this enhancement would be useful** to most Fast GraphRAG users. You may also want to point out the other projects that solved it better and which could serve as inspiration.
{ "source": "circlemind-ai/fast-graphrag", "title": "CONTRIBUTING.md", "url": "https://github.com/circlemind-ai/fast-graphrag/blob/main/CONTRIBUTING.md", "date": "2024-10-30T18:05:08", "stars": 2954, "description": "RAG that intelligently adapts to your use case, data, and queries", "file_size": 7469 }
<h1 align="center">
<img width="800" src="banner.png" alt="circlemind fast-graphrag">
</h1>
<h4 align="center">
  <a href="https://github.com/circlemind-ai/fast-graphrag/blob/main/LICENSE">
    <img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="fast-graphrag is released under the MIT license."/>
  </a>
  <a href="https://github.com/circlemind-ai/fast-graphrag/blob/main/CONTRIBUTING.md">
    <img src="https://img.shields.io/badge/PRs-Welcome-brightgreen" alt="PRs welcome!" />
  </a>
  <a href="https://circlemind.co">
    <img src="https://img.shields.io/badge/Project-Page-Green" alt="Circlemind Page" />
  </a>
  <img src="https://img.shields.io/badge/python->=3.10.1-blue">
</h4>
<p align="center">
  <p align="center"><b>Streamlined and promptable Fast GraphRAG framework designed for interpretable, high-precision, agent-driven retrieval workflows. <br> <a href="https://circlemind.co/"> Looking for a Managed Service? » </a> </b> </p>
</p>
<h4 align="center">
  <a href="#install">Install</a> |
  <a href="#quickstart">Quickstart</a> |
  <a href="https://discord.gg/DvY2B8u4sA">Community</a> |
  <a href="https://github.com/circlemind-ai/fast-graphrag/issues/new?assignees=&labels=&projects=&template=%F0%9F%90%9E-bug-report.md&title=">Report Bug</a> |
  <a href="https://github.com/circlemind-ai/fast-graphrag/issues/new?assignees=&labels=&projects=&template=%F0%9F%92%A1-feature-request.md&title=">Request Feature</a>
</h4>

> [!NOTE]
> Using *The Wizard of Oz*, `fast-graphrag` costs $0.08 vs. `graphrag` $0.48 — **a 6x cost saving** that further improves with data size and number of insertions.

## News (and Coming Soon)

- [ ] Support for IDF weighting of entities
- [x] Support for generic entities and concepts (initial commit)
- [x] [2024.12.02] Benchmarks comparing Fast GraphRAG to LightRAG, GraphRAG and VectorDBs released [here](https://github.com/circlemind-ai/fast-graphrag/blob/main/benchmarks/README.md)

## Features

- **Interpretable and Debuggable Knowledge:** Graphs offer a human-navigable view of knowledge that can be queried, visualized, and updated.
- **Fast, Low-cost, and Efficient:** Designed to run at scale without heavy resource or cost requirements.
- **Dynamic Data:** Automatically generate and refine graphs to best fit your domain and ontology needs.
- **Incremental Updates:** Supports real-time updates as your data evolves.
- **Intelligent Exploration:** Leverages PageRank-based graph exploration for enhanced accuracy and dependability.
- **Asynchronous & Typed:** Fully asynchronous, with complete type support for robust and predictable workflows.

Fast GraphRAG is built to fit seamlessly into your retrieval pipeline, giving you the power of advanced RAG, without the overhead of building and designing agentic workflows.

## Install

**Install from source (recommended for best performance)**

```bash
# clone this repo first
cd fast_graphrag
poetry install
```

**Install from PyPi (recommended for stability)**

```bash
pip install fast-graphrag
```

## Quickstart

Set the OpenAI API key in the environment:

```bash
export OPENAI_API_KEY="sk-..."
```

Download a copy of *A Christmas Carol* by Charles Dickens:

```bash
curl https://raw.githubusercontent.com/circlemind-ai/fast-graphrag/refs/heads/main/mock_data.txt > ./book.txt
```

Optional: set the limit for concurrent requests to the LLM (i.e., the number of tasks it processes simultaneously); this is helpful when running local models:

```bash
export CONCURRENT_TASK_LIMIT=8
```

Use the Python snippet below:

```python
from fast_graphrag import GraphRAG

DOMAIN = "Analyze this story and identify the characters. Focus on how they interact with each other, the locations they explore, and their relationships."

EXAMPLE_QUERIES = [
    "What is the significance of Christmas Eve in A Christmas Carol?",
    "How does the setting of Victorian London contribute to the story's themes?",
    "Describe the chain of events that leads to Scrooge's transformation.",
    "How does Dickens use the different spirits (Past, Present, and Future) to guide Scrooge?",
    "Why does Dickens choose to divide the story into \"staves\" rather than chapters?"
]

ENTITY_TYPES = ["Character", "Animal", "Place", "Object", "Activity", "Event"]

grag = GraphRAG(
    working_dir="./book_example",
    domain=DOMAIN,
    example_queries="\n".join(EXAMPLE_QUERIES),
    entity_types=ENTITY_TYPES
)

with open("./book.txt") as f:
    grag.insert(f.read())

print(grag.query("Who is Scrooge?").response)
```

The next time you initialize fast-graphrag from the same working directory, it will retain all the knowledge automatically.

## Examples

Please refer to the `examples` folder for a list of tutorials on common use cases of the library:

- `custom_llm.py`: a brief example of how to configure fast-graphrag to run with different OpenAI API-compatible language models and embedders;
- `checkpointing.ipynb`: a tutorial on how to use checkpoints to avoid irreversible data corruption;
- `query_parameters.ipynb`: a tutorial on how to use the different query parameters. In particular, it shows how to include references to the information used in the provided answer (using the `with_references=True` parameter).

## Contributing

Whether it's big or small, we love contributions. Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

Check out our [guide](https://github.com/circlemind-ai/fast-graphrag/blob/main/CONTRIBUTING.md) to see how to get started.

Not sure where to get started? You can join our [Discord](https://discord.gg/DvY2B8u4sA) and ask us any questions there.

## Philosophy

Our mission is to increase the number of successful GenAI applications in the world. To do that, we build memory and data tools that enable LLM apps to leverage highly specialized retrieval pipelines without the complexity of setting up and maintaining agentic workflows.

Fast GraphRAG currently exploits the personalized PageRank algorithm to explore the graph and find the most relevant pieces of information to answer your query. For an overview of why this works, you can check out the HippoRAG paper [here](https://arxiv.org/abs/2405.14831).

## Open-source or Managed Service

This repo is under the MIT License. See [LICENSE.txt](https://github.com/circlemind-ai/fast-graphrag/blob/main/LICENSE) for more information.

The fastest and most reliable way to get started with Fast GraphRAG is using our managed service. Your first 100 requests are free every month, after which you pay based on usage.
<h1 align="center"> <img width="800" src="demo.gif" alt="circlemind fast-graphrag demo"> </h1> To learn more about our managed service, [book a demo](https://circlemind.co/demo) or see our [docs](https://docs.circlemind.co/quickstart).
{ "source": "circlemind-ai/fast-graphrag", "title": "README.md", "url": "https://github.com/circlemind-ai/fast-graphrag/blob/main/README.md", "date": "2024-10-30T18:05:08", "stars": 2954, "description": "RAG that intelligently adapts to your use case, data, and queries", "file_size": 6923 }
## Benchmarks

We validate the benchmark results provided in [HippoRAG](https://arxiv.org/abs/2405.14831), as well as compare with other methods:
- NaiveRAG (vector dbs) using the OpenAI embedder `text-embedding-3-small`
- [LightRAG](https://github.com/HKUDS/LightRAG)
- [GraphRAG](https://github.com/gusye1234/nano-graphrag) (we use the implementation provided by `nano-graphrag`, based on the original [Microsoft GraphRAG](https://github.com/microsoft/graphrag))

### Results

**2wikimultihopQA**

| # Queries | Method | All queries % | Multihop only % |
|----------:|:--------:|--------------:|----------------:|
| 51||||
| | VectorDB| 0.49| 0.32|
| | LightRAG| 0.47| 0.32|
| | GraphRAG| 0.75| 0.68|
| |**Circlemind**| **0.96**| **0.95**|
| 101||||
| | VectorDB| 0.42| 0.23|
| | LightRAG| 0.45| 0.28|
| | GraphRAG| 0.73| 0.64|
| |**Circlemind**| **0.93**| **0.90**|

**Circlemind is up to 4x more accurate than VectorDB RAG.**

**HotpotQA**

| # Queries | Method | All queries % |
|----------:|:--------:|--------------:|
| 101|||
| | VectorDB| 0.78|
| | LightRAG| 0.55|
| | GraphRAG| -*|
| |**Circlemind**| **0.84**|

*: crashes after half an hour of processing

Below, find the insertion times for the 2wikimultihopqa benchmark (~800 chunks):

| Method | Time (minutes) |
|:--------:|-----------------:|
| VectorDB| ~0.3|
| LightRAG| ~25|
| GraphRAG| ~40|
|**Circlemind**| ~1.5|

**Circlemind is 27x faster than GraphRAG while also being over 40% more accurate in retrieval.**

### Run it yourself

The scripts in this directory will generate and evaluate the 2wikimultihopqa datasets on subsets of 51 and 101 queries with the same methodology as in the HippoRAG paper. In particular, we evaluate the retrieval capabilities of each method, measuring the percentage of queries for which all the required evidence was retrieved.

We preloaded the results so it is enough to run `evaluate_dbs.xx` to get the numbers. You can also run `create_dbs.xx` to regenerate the databases for the different methods.

A couple of NOTES:
- you will need to set an OPENAI_API_KEY;
- LightRAG and GraphRAG could take over an hour to process and they can be expensive;
- when pip installing LightRAG, not all dependencies are added; to run it we simply deleted all the imports of each missing dependency (since we use OpenAI they are not necessary).
- we also benchmarked on the HotpotQA dataset (we will soon release the code for that as well).

The output will look similar to the following (the exact numbers could vary based on your graph configuration):

```
Evaluation of the performance of different RAG methods on 2wikimultihopqa (51 queries)

VectorDB
Loading dataset...
[all questions] Percentage of queries with perfect retrieval: 0.49019607843137253
[multihop only] Percentage of queries with perfect retrieval: 0.32432432432432434

LightRAG [local mode]
Loading dataset...
Percentage of queries with perfect retrieval: 0.47058823529411764
[multihop only] Percentage of queries with perfect retrieval: 0.32432432432432434

GraphRAG [local mode]
Loading dataset...
[all questions] Percentage of queries with perfect retrieval: 0.7450980392156863
[multihop only] Percentage of queries with perfect retrieval: 0.6756756756756757

Circlemind
Loading dataset...
[all questions] Percentage of queries with perfect retrieval: 0.9607843137254902
[multihop only] Percentage of queries with perfect retrieval: 0.9459459459459459

Evaluation of the performance of different RAG methods on 2wikimultihopqa (101 queries)

VectorDB
Loading dataset...
[all questions] Percentage of queries with perfect retrieval: 0.4158415841584158 [multihop only] Percentage of queries with perfect retrieval: 0.2318840579710145 LightRAG [local mode] Loading dataset... [all questions] Percentage of queries with perfect retrieval: 0.44554455445544555 [multihop only] Percentage of queries with perfect retrieval: 0.2753623188405797 GraphRAG [local mode] Loading dataset... [all questions] Percentage of queries with perfect retrieval: 0.7326732673267327 [multihop only] Percentage of queries with perfect retrieval: 0.6376811594202898 Circlemind Loading dataset... [all questions] Percentage of queries with perfect retrieval: 0.9306930693069307 [multihop only] Percentage of queries with perfect retrieval: 0.8985507246376812 ```
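For clarity, the "percentage of queries with perfect retrieval" reported above counts a query as a hit only if every required evidence passage was retrieved for it. A minimal sketch of that computation (illustrative only, not the actual benchmark scripts) is:

```python
from typing import Iterable, Set


def perfect_retrieval_rate(gold: Iterable[Set[str]], retrieved: Iterable[Set[str]]) -> float:
    """Fraction of queries whose retrieved passages cover ALL required evidence."""
    pairs = list(zip(gold, retrieved))
    hits = sum(1 for g, r in pairs if g <= r)  # g <= r: gold evidence is a subset of retrieved
    return hits / len(pairs)


# Example: 2 of 3 queries have all of their required evidence retrieved -> ~0.67
print(perfect_retrieval_rate(
    [{"e1", "e2"}, {"e3"}, {"e4", "e5"}],
    [{"e1", "e2", "x"}, {"e3"}, {"e4"}],
))
```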
{ "source": "circlemind-ai/fast-graphrag", "title": "benchmarks/README.md", "url": "https://github.com/circlemind-ai/fast-graphrag/blob/main/benchmarks/README.md", "date": "2024-10-30T18:05:08", "stars": 2954, "description": "RAG that intelligently adapts to your use case, data, and queries", "file_size": 4739 }
--- name: "\U0001F41E Bug report" about: Create a report to help us improve title: '' labels: '' assignees: '' --- **Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Additional context** Add any other context about the problem here.
{ "source": "circlemind-ai/fast-graphrag", "title": ".github/ISSUE_TEMPLATE/🐞-bug-report.md", "url": "https://github.com/circlemind-ai/fast-graphrag/blob/main/.github/ISSUE_TEMPLATE/🐞-bug-report.md", "date": "2024-10-30T18:05:08", "stars": 2954, "description": "RAG that intelligently adapts to your use case, data, and queries", "file_size": 547 }
--- name: "\U0001F4A1 Feature request" about: Suggest an idea for this project title: '' labels: '' assignees: '' --- **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here.
{ "source": "circlemind-ai/fast-graphrag", "title": ".github/ISSUE_TEMPLATE/💡-feature-request.md", "url": "https://github.com/circlemind-ai/fast-graphrag/blob/main/.github/ISSUE_TEMPLATE/💡-feature-request.md", "date": "2024-10-30T18:05:08", "stars": 2954, "description": "RAG that intelligently adapts to your use case, data, and queries", "file_size": 607 }
# OpenAI Realtime Console This is an example application showing how to use the [OpenAI Realtime API](https://platform.openai.com/docs/guides/realtime) with [WebRTC](https://platform.openai.com/docs/guides/realtime-webrtc). ## Installation and usage Before you begin, you'll need an OpenAI API key - [create one in the dashboard here](https://platform.openai.com/settings/api-keys). Create a `.env` file from the example file and set your API key in there: ```bash cp .env.example .env ``` Running this application locally requires [Node.js](https://nodejs.org/) to be installed. Install dependencies for the application with: ```bash npm install ``` Start the application server with: ```bash npm run dev ``` This should start the console application on [http://localhost:3000](http://localhost:3000). This application is a minimal template that uses [express](https://expressjs.com/) to serve the React frontend contained in the [`/client`](./client) folder. The server is configured to use [vite](https://vitejs.dev/) to build the React frontend. This application shows how to send and receive Realtime API events over the WebRTC data channel and configure client-side function calling. You can also view the JSON payloads for client and server events using the logging panel in the UI. For a more comprehensive example, see the [OpenAI Realtime Agents](https://github.com/openai/openai-realtime-agents) demo built with Next.js, using an agentic architecture inspired by [OpenAI Swarm](https://github.com/openai/swarm). ## Previous WebSockets version The previous version of this application that used WebSockets on the client (not recommended in browsers) [can be found here](https://github.com/openai/openai-realtime-console/tree/websockets). ## License MIT
{ "source": "openai/openai-realtime-console", "title": "README.md", "url": "https://github.com/openai/openai-realtime-console/blob/main/README.md", "date": "2024-09-30T19:00:38", "stars": 2939, "description": "React app for inspecting, building and debugging with the Realtime API", "file_size": 1778 }
# Mochi 1 [Blog](https://www.genmo.ai/blog) | [Hugging Face](https://huggingface.co/genmo/mochi-1-preview) | [Playground](https://www.genmo.ai/play) | [Careers](https://jobs.ashbyhq.com/genmo) A state of the art video generation model by [Genmo](https://genmo.ai). https://github.com/user-attachments/assets/4d268d02-906d-4cb0-87cc-f467f1497108 ## News - ⭐ **November 26, 2024**: Added support for [LoRA fine-tuning](demos/fine_tuner/README.md) - ⭐ **November 5, 2024**: Consumer-GPU support for Mochi [natively in ComfyUI](https://x.com/ComfyUI/status/1853838184012251317) ## Overview Mochi 1 preview is an open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluation. This model dramatically closes the gap between closed and open video generation systems. We’re releasing the model under a permissive Apache 2.0 license. Try this model for free on [our playground](https://genmo.ai/play). ## Installation Install using [uv](https://github.com/astral-sh/uv): ```bash git clone https://github.com/genmoai/models cd models pip install uv uv venv .venv source .venv/bin/activate uv pip install setuptools uv pip install -e . --no-build-isolation ``` If you want to install flash attention, you can use: ``` uv pip install -e .[flash] --no-build-isolation ``` You will also need to install [FFMPEG](https://www.ffmpeg.org/) to turn your outputs into videos. ## Download Weights Use [download_weights.py](scripts/download_weights.py) to download the model + VAE to a local directory. Use it like this: ```bash python3 ./scripts/download_weights.py weights/ ``` Or, directly download the weights from [Hugging Face](https://huggingface.co/genmo/mochi-1-preview/tree/main) or via `magnet:?xt=urn:btih:441da1af7a16bcaa4f556964f8028d7113d21cbb&dn=weights&tr=udp://tracker.opentrackr.org:1337/announce` to a folder on your computer. ## Running Start the gradio UI with ```bash python3 ./demos/gradio_ui.py --model_dir weights/ --cpu_offload ``` Or generate videos directly from the CLI with ```bash python3 ./demos/cli.py --model_dir weights/ --cpu_offload ``` If you have a fine-tuned LoRA in the safetensors format, you can add `--lora_path <path/to/my_mochi_lora.safetensors>` to either `gradio_ui.py` or `cli.py`. ## API This repository comes with a simple, composable API, so you can programmatically call the model. You can find a full example [here](demos/api_example.py). But, roughly, it looks like this: ```python from genmo.mochi_preview.pipelines import ( DecoderModelFactory, DitModelFactory, MochiSingleGPUPipeline, T5ModelFactory, linear_quadratic_schedule, ) pipeline = MochiSingleGPUPipeline( text_encoder_factory=T5ModelFactory(), dit_factory=DitModelFactory( model_path=f"weights/dit.safetensors", model_dtype="bf16" ), decoder_factory=DecoderModelFactory( model_path=f"weights/decoder.safetensors", ), cpu_offload=True, decode_type="tiled_spatial", ) video = pipeline( height=480, width=848, num_frames=31, num_inference_steps=64, sigma_schedule=linear_quadratic_schedule(64, 0.025), cfg_schedule=[6.0] * 64, batch_cfg=False, prompt="your favorite prompt here ...", negative_prompt="", seed=12345, ) ``` ## Fine-tuning with LoRA We provide [an easy-to-use trainer](demos/fine_tuner/README.md) that allows you to build LoRA fine-tunes of Mochi on your own videos. The model can be fine-tuned on one H100 or A100 80GB GPU. 
## Model Architecture Mochi 1 represents a significant advancement in open-source video generation, featuring a 10 billion parameter diffusion model built on our novel Asymmetric Diffusion Transformer (AsymmDiT) architecture. Trained entirely from scratch, it is the largest video generative model ever openly released. And best of all, it’s a simple, hackable architecture. Additionally, we are releasing an inference harness that includes an efficient context parallel implementation. Alongside Mochi, we are open-sourcing our video AsymmVAE. We use an asymmetric encoder-decoder structure to build an efficient high quality compression model. Our AsymmVAE causally compresses videos to a 128x smaller size, with an 8x8 spatial and a 6x temporal compression to a 12-channel latent space. ### AsymmVAE Model Specs |Params <br> Count | Enc Base <br> Channels | Dec Base <br> Channels |Latent <br> Dim | Spatial <br> Compression | Temporal <br> Compression | |:--:|:--:|:--:|:--:|:--:|:--:| |362M | 64 | 128 | 12 | 8x8 | 6x | An AsymmDiT efficiently processes user prompts alongside compressed video tokens by streamlining text processing and focusing neural network capacity on visual reasoning. AsymmDiT jointly attends to text and visual tokens with multi-modal self-attention and learns separate MLP layers for each modality, similar to Stable Diffusion 3. However, our visual stream has nearly 4 times as many parameters as the text stream via a larger hidden dimension. To unify the modalities in self-attention, we use non-square QKV and output projection layers. This asymmetric design reduces inference memory requirements. Many modern diffusion models use multiple pretrained language models to represent user prompts. In contrast, Mochi 1 simply encodes prompts with a single T5-XXL language model. ### AsymmDiT Model Specs |Params <br> Count | Num <br> Layers | Num <br> Heads | Visual <br> Dim | Text <br> Dim | Visual <br> Tokens | Text <br> Tokens | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |10B | 48 | 24 | 3072 | 1536 | 44520 | 256 | ## Hardware Requirements The repository supports both multi-GPU operation (splitting the model across multiple graphics cards) and single-GPU operation, though it requires approximately 60GB VRAM when running on a single GPU. While ComfyUI can optimize Mochi to run on less than 20GB VRAM, this implementation prioritizes flexibility over memory efficiency. When using this repository, we recommend using at least 1 H100 GPU. ## Safety Genmo video models are general text-to-video diffusion models that inherently reflect the biases and preconceptions found in their training data. While steps have been taken to limit NSFW content, organizations should implement additional safety protocols and careful consideration before deploying these model weights in any commercial services or products. ## Limitations Under the research preview, Mochi 1 is a living and evolving checkpoint. There are a few known limitations. The initial release generates videos at 480p today. In some edge cases with extreme motion, minor warping and distortions can also occur. Mochi 1 is also optimized for photorealistic styles so does not perform well with animated content. We also anticipate that the community will fine-tune the model to suit various aesthetic preferences. ## Related Work - [ComfyUI-MochiWrapper](https://github.com/kijai/ComfyUI-MochiWrapper) adds ComfyUI support for Mochi. The integration of Pytorch's SDPA attention was based on their repository. 
- [ComfyUI-MochiEdit](https://github.com/logtd/ComfyUI-MochiEdit) adds ComfyUI nodes for video editing, such as object insertion and restyling.
- [mochi-xdit](https://github.com/xdit-project/mochi-xdit) is a fork of this repository and improves the parallel inference speed with [xDiT](https://github.com/xdit-project/xdit).
- [Modal script](contrib/modal/readme.md) for fine-tuning Mochi on Modal GPUs.

## BibTeX

```
@misc{genmo2024mochi,
      title={Mochi 1},
      author={Genmo Team},
      year={2024},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished={\url{https://github.com/genmoai/models}}
}
```
{ "source": "genmoai/mochi", "title": "README.md", "url": "https://github.com/genmoai/mochi/blob/main/README.md", "date": "2024-09-11T02:55:33", "stars": 2935, "description": "The best OSS video generation models", "file_size": 7711 }
# Mochi Community Contributions `mochi/contrib` contains community contributed pipelines for running and customizing Mochi. ## Index: - `mochi/contrib/modal` - [Script](contrib/modal/readme.md) for fine-tuning Mochi on Modal GPUs.
{ "source": "genmoai/mochi", "title": "contrib/README.md", "url": "https://github.com/genmoai/mochi/blob/main/contrib/README.md", "date": "2024-09-11T02:55:33", "stars": 2935, "description": "The best OSS video generation models", "file_size": 233 }
## Finetuning Mochi with LoRA on Modal

This example demonstrates how to run the Mochi finetuner on Modal GPUs.

### Setup

Install [Modal](https://modal.com/docs/guide).

```bash
pip install modal
modal setup
```

### Fetch the dataset

There is a labeled dataset for a dissolving visual effect available on Google Drive. Download it into the `mochi-tune-videos` modal volume with:

```bash
modal run main::download_videos
```

### Download the model weights

Download the model weights from Hugging Face into the `mochi-tune-weights` modal volume with:

```bash
modal run -d main::download_weights
```

Note that this download can take more than 30 minutes. The `-d` flag allows you to exit the terminal session without losing progress.

### Prepare the dataset

We now run the preprocessing script to prepare the dataset for finetuning:

```bash
modal run main::preprocess
```

This puts preprocessed training input into the `mochi-tune-videos-prepared` modal volume.

### Finetuning

Finetune the model using the prepared dataset.

You may configure the finetune run using the `lora.yaml` file, setting options such as the number of steps and the learning rate.

Run the finetuning with:

```bash
modal run -d main::finetune
```

This will produce a series of checkpoints, as well as video samples generated along the training process. You can view these files in the Modal `mochi-tune-finetunes` volume using the Storage tab in the dashboard.

### Inference

You can now use the MochiLora class to generate videos from a prompt. The `main` entrypoint will initialize the model to use the specified LoRA weights from your finetuning run.

```bash
modal run main
```

or with more parameters:

```bash
modal run main lora-path="/finetunes/my_mochi_lora/model_1000.lora.safetensors" prompt="A pristine snowglobe featuring a winter scene sits peacefully. The glass begins to crumble into fine powder, as the entire sphere deteriorates into sparkling dust that drifts outward."
```

See `modal run main --help` for all inference options.
{ "source": "genmoai/mochi", "title": "contrib/modal/readme.md", "url": "https://github.com/genmoai/mochi/blob/main/contrib/modal/readme.md", "date": "2024-09-11T02:55:33", "stars": 2935, "description": "The best OSS video generation models", "file_size": 2001 }
# Mochi 1 LoRA Fine-tuner

![Mochi being made](../../assets/mochi-factory.webp)

This folder contains tools for fine-tuning the Mochi 1 model. It supports [LoRA](https://arxiv.org/abs/2106.09685) fine-tuning on a single GPU.

## Quick Start (Single GPU)

This shows you how to prepare your dataset for a single GPU. First, set up the inference code and download Mochi 1 weights following [README.md](../../README.md). All commands below assume you are in the top-level directory of the Mochi repo.

### 1. Collect your videos and captions

Collect your videos (supported formats: MP4, MOV) into a folder, e.g. `videos/`. Then, write a detailed description of each of the videos in a txt file with the same name. For example,
```
videos/
  video_1.mp4
  video_1.txt -- One-paragraph description of video_1
  video_2.mp4
  video_2.txt -- One-paragraph description of video_2
  ...
```

### 2. Process videos and captions (About 2 minutes)

Update the paths in the command below to match your dataset. Videos are processed at 30 FPS, so make sure your videos are at least `num_frames / 30` seconds long.
```bash
bash demos/fine_tuner/preprocess.bash -v videos/ -o videos_prepared/ -w weights/ --num_frames 37
```

### 3. Fine-tune the model

Update `./demos/fine_tuner/configs/lora.yaml` to customize the fine-tuning process, including prompts to generate at various points of the fine-tuning process and the path to your prepared videos.

Launch LoRA fine-tuning on a single GPU:
```bash
bash ./demos/fine_tuner/run.bash -c ./demos/fine_tuner/configs/lora.yaml -n 1
```
Samples will be generated in `finetunes/my_mochi_lora/samples` every 200 steps.

### 4. Use your fine-tuned weights to generate videos!

Update `--lora_path` to the path of your fine-tuned weights and run:
```bash
python3 ./demos/cli.py --model_dir weights/ --lora_path finetunes/my_mochi_lora/model_2000.lora.safetensors --num_frames 37 --cpu_offload --prompt "A delicate porcelain teacup sits on a marble countertop. The teacup suddenly shatters into hundreds of white ceramic shards that scatter through the air. The scene is bright and crisp with dramatic lighting."
```
You can increase the number of frames to generate a longer video. Finally, share your creations with the community by uploading your LoRA and sample videos to Hugging Face.

## System Requirements

**Single GPU:**
- 1x H100 or A100 (80 GB VRAM is recommended)
- Less VRAM is required if training with less than 1 second long videos.

**Supported video lengths:** Up to 85 frames (~2.8 seconds at 30 FPS)
- Choose a frame count in increments of 6: 25, 31, 37, ... 79, 85.
- Training on 37 frames uses 50 GB of VRAM. On 1 H100, each training step takes about 1.67 s/it, and you'll start seeing changes to your videos within 200-400 steps. Training for 1,000 steps takes about 30 minutes.
Settings tested on 1x H100 SXM:

| Frames | Video Length | VRAM | Time/step | num_qkv_checkpoint | num_ff_checkpoint | num_post_attn_checkpoint |
|--------|--------------|------|-----------|-------------------|-------------------|-------------------------|
| 37 frames | 1.2 second videos | 50 GB VRAM | 1.67 s/it | 48 | 48† | 48 |
| 61 frames | 2.0 second videos | 64 GB VRAM | 3.35 s/it | 48 | 48† | 48 |
| 79 frames | 2.6 second videos | 69-78 GB VRAM | 4.92 s/it | 48 | 48† | 48 |
| 85 frames | 2.8 second videos | 80 GB VRAM | 5.44 s/it | 48 | 48 | 48 |

*† As the VRAM is not fully used, you can lower `num_ff_checkpoint` to speed up training.*

## Technical Details

- LoRA fine-tuning updates the query, key, and value projection matrices, as well as the output projection matrix. These settings are configurable in `./demos/fine_tuner/configs/lora.yaml`.
- We welcome contributions and suggestions for improved settings.

## Known Limitations

- No support for training on multiple GPUs
- LoRA inference is restricted to 1-GPU (for now)

## Tips

- Be as descriptive as possible in your captions.
- A learning rate around 1e-4 or 2e-4 seems effective for LoRA fine-tuning.
- For larger datasets or to customize the model aggressively, increase `num_steps` in the YAML.
- To monitor training loss, uncomment the `wandb` section in the YAML and run `wandb login` or set the `WANDB_API_KEY` environment variable.
- Videos are trimmed to the **first** `num_frames` frames. Make sure your clips contain the content you care about near the beginning. You can check the trimmed versions after running `preprocess.bash` to make sure they look good.
- When capturing HDR videos on an iPhone, convert your .mov files to .mp4 using the Handbrake application. Our preprocessing script won't produce the correct colorspace otherwise, and your fine-tuned videos may look overly bright.

### If you are running out of GPU memory, make sure:

- `COMPILE_DIT=1` is set in `demos/fine_tuner/run.bash`. This enables model compilation, which saves memory and speeds up training!
- `num_post_attn_checkpoint`, `num_ff_checkpoint`, and `num_qkv_checkpoint` are set to 48 in your YAML. You can checkpoint up to 48 layers, saving memory at the cost of slower training.
- If all else fails, reduce `num_frames` when processing your videos and in your YAML. You can fine-tune Mochi on shorter videos, and still generate longer videos at inference time.

## Diffusers trainer

The [Diffusers Python library](https://github.com/huggingface/diffusers) supports LoRA fine-tuning of Mochi 1 as well. Check out [this link](https://github.com/a-r-r-o-w/cogvideox-factory/tree/80d1150a0e233a1b2b98dd0367c06276989d049c/training/mochi-1) for more details.
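As an optional sanity check before step 2 of the quick start, a small helper like the one below can confirm that every clip has a matching caption file and remind you of the minimum clip length for your chosen frame count. This is an illustrative sketch, not part of the repo:

```python
from pathlib import Path

NUM_FRAMES = 37               # choose in increments of 6: 25, 31, 37, ..., 85
FPS = 30                      # videos are processed at 30 FPS
VIDEO_DIR = Path("videos")    # folder containing your .mp4/.mov clips and .txt captions

print(f"Each clip should be at least {NUM_FRAMES / FPS:.2f}s long ({NUM_FRAMES} frames @ {FPS} FPS)")

clips = sorted(list(VIDEO_DIR.glob("*.mp4")) + list(VIDEO_DIR.glob("*.mov")))
for clip in clips:
    caption = clip.with_suffix(".txt")
    if not caption.exists():
        print(f"Missing caption file: {caption.name}")
    elif not caption.read_text().strip():
        print(f"Empty caption file:   {caption.name}")
```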
{ "source": "genmoai/mochi", "title": "demos/fine_tuner/README.md", "url": "https://github.com/genmoai/mochi/blob/main/demos/fine_tuner/README.md", "date": "2024-09-11T02:55:33", "stars": 2935, "description": "The best OSS video generation models", "file_size": 5568 }
# AI Engineering Hub 🚀 Welcome to the **AI Engineering Hub**! ## 🌟 Why This Repo? AI Engineering is advancing rapidly, and staying at the forefront requires both deep understanding and hands-on experience. Here, you will find: - In-depth tutorials on **LLMs and RAGs** - Real-world **AI agent** applications - Examples to implement, adapt, and scale in your projects Whether you’re a beginner, practitioner, or researcher, this repo provides resources for all skill levels to experiment and succeed in AI engineering. --- ## 📬 Stay Updated with Our Newsletter! **Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com) [![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com) --- ## 📢 Contribute to the AI Engineering Hub! We welcome contributors! Whether you want to add new tutorials, improve existing code, or report issues, your contributions make this community thrive. Here’s how to get involved: 1. **Fork** the repository. 2. Create a new branch for your contribution. 3. Submit a **Pull Request** and describe the improvements. --- ## 📜 License This repository is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. ## 💬 Connect For discussions, suggestions, and more, feel free to [create an issue](https://github.com/patchy631/ai-engineering/issues) or reach out directly! Happy Coding! 🎉
{ "source": "patchy631/ai-engineering-hub", "title": "README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1622 }
# Build a reasoning model like DeepSeek-R1

This project shows how to build a reasoning model like DeepSeek-R1 using Unsloth.

---

## Setup and installations

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.

```bash
pip install unsloth vllm
```

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
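For orientation, the Unsloth workflow this setup prepares for starts by loading a base model in 4-bit and attaching LoRA adapters; the reasoning-style (GRPO-like) training then runs on top of that. The sketch below only shows the loading step, and the model name, sequence length, and LoRA settings are illustrative assumptions rather than values from this project:

```python
from unsloth import FastLanguageModel

# Load a base model in 4-bit (model name and max_seq_length are assumptions)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-3B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained during RL
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```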
{ "source": "patchy631/ai-engineering-hub", "title": "Build-reasoning-model/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/Build-reasoning-model/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 856 }
# DeepSeek Finetuning This project fine-tunes DeepSeek (distilled Llama variant) using Unsloth and Ollama. --- ## Setup and installations **Setup Ollama**: ```bash # setup ollama on linux curl -fsSL https://ollama.com/install.sh | sh ``` **Install Dependencies**: Ensure you have Python 3.11 or later installed. ```bash pip install unsloth ollama ``` --- ## 📬 Stay Updated with Our Newsletter! **Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com) [![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com) --- ## Contribution Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
{ "source": "patchy631/ai-engineering-hub", "title": "DeepSeek-finetuning/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/DeepSeek-finetuning/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 954 }
# LaTeX-OCR This project leverages Llama 3.2 vision and Streamlit to create a LaTeX OCR app that converts images of LaTeX equations to LaTeX code. ## Demo Video Click below to watch the demo video of the AI Assistant in action: [Watch the video](LaTeX-OCR.mp4) ## Installation and setup **Setup Ollama**: *On Linux*: ```bash curl -fsSL https://ollama.com/install.sh | sh # pull llama 3.2 vision model ollama run llama3.2-vision ``` *On MacOS*: ```bash /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" # get homebrew xcode-select --install brew install ollama # install ollama ollama pull llama3.2-vision # pull llama 3.2 vision model ollama run llama3.2-vision ``` **Install Dependencies**: Ensure you have Python 3.11 or later installed. ```bash pip install streamlit ollama ``` --- ## 📬 Stay Updated with Our Newsletter! **Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com) [![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com) --- ## Contribution Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
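Under the hood, the app sends the equation image to the locally running `llama3.2-vision` model through the `ollama` Python client. A minimal version of that call (the prompt wording and file name are illustrative, not the app's exact code) looks like:

```python
import ollama

response = ollama.chat(
    model="llama3.2-vision",
    messages=[{
        "role": "user",
        "content": "Transcribe this equation into LaTeX. Return only the LaTeX code.",
        "images": ["equation.png"],  # path to your equation screenshot (assumed file name)
    }],
)
print(response["message"]["content"])
```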
{ "source": "patchy631/ai-engineering-hub", "title": "LaTeX-OCR-with-Llama/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/LaTeX-OCR-with-Llama/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1477 }
# Convert ANY website into an API using Firecrawl This project lets you convert ANY website into an API using Firecrawl. - [Firecrawl](https://www.firecrawl.dev/i/api) is used to scrape websites. - Streamlit is used to create a web interface for the project. --- ## Setup and installations **Get Firecrawl API Key**: - Go to [Firecrawl](https://www.firecrawl.dev/i/api) and sign up for an account. - Once you have an account, go to the API Key page and copy your API key. - Paste your API key by creating a `.env` file as follows: ``` FIRECRAWL_API_KEY=your_api_key ``` **Install Dependencies**: Ensure you have Python 3.11 or later installed. ```bash pip install streamlit firecrawl ``` --- ## Run the project Finally, run the project by running the following command: ```bash streamlit run app.py ``` --- ## 📬 Stay Updated with Our Newsletter! **Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com) [![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com) --- ## Contribution Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
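At its core, the app wraps a Firecrawl call that turns a URL into LLM-ready content. A minimal version (the URL is just an example, and the exact shape of the returned object can differ between SDK versions, so the sketch simply prints it) looks roughly like:

```python
import os

from firecrawl import FirecrawlApp

app = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])

# Scrape a single page; Firecrawl returns the page content in an LLM-ready format
result = app.scrape_url("https://example.com")
print(result)
```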
{ "source": "patchy631/ai-engineering-hub", "title": "Website-to-API-with-FireCrawl/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/Website-to-API-with-FireCrawl/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1400 }
# YouTube Trend Analysis with CrewAI and BrightData This project implements a YouTube Trend Analysis with CrewAI and BrightData. - [Bright Data](https://brdta.com/dailydoseofds) is used to scrape YouTube videos. - CrewAI is used to analyze the transcripts of the videos and generate a summary. - Streamlit is used to create a web interface for the project. --- ## Setup and installations **Get BrightData API Key**: - Go to [Bright Data](https://brdta.com/dailydoseofds) and sign up for an account. - Once you have an account, go to the API Key page and copy your API key. - Paste your API key by creating a `.env` file as follows: ``` BRIGHT_DATA_API_KEY=your_api_key ``` **Setup Ollama**: ```bash # setup ollama on linux curl -fsSL https://ollama.com/install.sh | sh # pull llama 3.2 model ollama pull llama3.2 ``` **Install Dependencies**: Ensure you have Python 3.11 or later installed. ```bash pip install streamlit ollama crewai crewai-tools ``` --- ## Run the project Finally, run the project by running the following command: ```bash streamlit run app.py ``` --- ## 📬 Stay Updated with Our Newsletter! **Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com) [![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com) --- ## Contribution Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
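For context, the CrewAI side of the pipeline boils down to an analysis agent and a summarization task run over the transcripts scraped with Bright Data. The snippet below is an illustrative skeleton (the agent/task wording and local model id are assumptions, not the project's actual crew definition):

```python
from crewai import Agent, Crew, Task

analyst = Agent(
    role="YouTube Trend Analyst",
    goal="Identify common themes across recent video transcripts",
    backstory="You analyze YouTube transcripts and distill the trends they reveal.",
    llm="ollama/llama3.2",  # assumes the locally pulled Llama 3.2 model
)

summarize = Task(
    description="Analyze these transcripts and summarize the main trends:\n{transcripts}",
    expected_output="A short report of the key trends with supporting examples.",
    agent=analyst,
)

crew = Crew(agents=[analyst], tasks=[summarize])
result = crew.kickoff(inputs={"transcripts": "...transcripts scraped with Bright Data..."})
print(result)
```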
{ "source": "patchy631/ai-engineering-hub", "title": "Youtube-trend-analysis/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/Youtube-trend-analysis/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1686 }
# Agentic RAG using CrewAI

This project leverages CrewAI to build an Agentic RAG that searches through your docs and falls back to web search if it doesn't find the answer in the docs. You can use either DeepSeek-R1 or Llama 3.2, both running locally. More details in the "Running the app" section below!

Before that, make sure you grab your FireCrawl API keys to search the web.

**Get API Keys**:
- [FireCrawl](https://www.firecrawl.dev/i/api)

### Watch Demo on YouTube

[![Watch Demo on YouTube](https://github.com/patchy631/ai-engineering-hub/blob/main/agentic_rag/thumbnail/thumbnail.png)](https://youtu.be/O4yBW_GTRk0)

## Installation and setup

**Get API Keys**:
- [FireCrawl](https://www.firecrawl.dev/i/api)

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.

```bash
pip install crewai crewai-tools chonkie[semantic] markitdown qdrant-client fastembed
```

**Running the app**:
To use DeepSeek-R1, run `streamlit run app_deep_seek.py`; to use Llama 3.2, run `streamlit run app_llama3.2.py`.

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
{ "source": "patchy631/ai-engineering-hub", "title": "agentic_rag/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/agentic_rag/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1651 }
# Enterprise-grade, agentic RAG over complex real-world docs

The project uses EyelevelAI's state-of-the-art document parsing and retrieval system GroundX. It's integrated as a custom tool with CrewAI.

Before you start, quickly test it on your own document [here](https://dashboard.eyelevel.ai/xray)

GroundX can also be deployed completely on premise; the code is open source, here's their [GitHub repo](https://github.com/eyelevelai/groundx-on-prem).

Grab your API keys here:
- [GroundX API keys](https://docs.eyelevel.ai/documentation/fundamentals/quickstart#step-1-getting-your-api-key)
- [SERPER API keys](https://serper.dev/)

### Watch this tutorial on YouTube

[![Watch this tutorial on YouTube](https://github.com/patchy631/ai-engineering-hub/blob/main/agentic_rag_deepseek/assets/thumbnail.png)](https://www.youtube.com/watch?v=79xvgj4wvHQ)

---

## Setup and installations

**Setup Environment**:
- Paste your API keys into a `.env` file
- Refer to the `.env.example` file

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.

```bash
pip install groundx crewai crewai-tools
```

**Running the app**:

```bash
streamlit run app_deep_seek.py
```

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
{ "source": "patchy631/ai-engineering-hub", "title": "agentic_rag_deepseek/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/agentic_rag_deepseek/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1766 }
# AI News generator This project leverages CrewAI and Cohere's Command-R:7B model to build an AI news generator! ## Installation and setup **Get API Keys**: - [Serper API Key](https://serper.dev/) - [Cohere API Key](https://dashboard.cohere.com/api-keys) **Install Dependencies**: Ensure you have Python 3.11 or later installed. ```bash pip install crewai crewai-tools ``` --- ## 📬 Stay Updated with Our Newsletter! **Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com) [![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com) --- ## Contribution Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
{ "source": "patchy631/ai-engineering-hub", "title": "ai_news_generator/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/ai_news_generator/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 969 }
# Coding and Stock Analyst

This project leverages Microsoft's AutoGen to build an advanced Coding and Stock Analyst. The solution is powered by Qualcomm's **Cloud AI 100 Ultra**, enabling high-performance LLM serving. Explore the [Qualcomm Cloud AI 100 Ultra Playground](http://bit.ly/Qualcomm-CloudAI100Ultra-Playground) to learn more.

## Demo

Check out the demo video below to see the project in action:

[Demo Video](https://youtu.be/ijHtziG0knY)

## Installation

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.
   ```bash
   pip install imagine_sdk-0.4.1-py3-none-any.whl[langchain]
   pip install autogen-agentchat~=0.2
   ```

## Features

- Advanced LLM-powered stock analysis
- Auto-code generation for financial and analytical tasks
- Optimized deployment using Qualcomm's Cloud AI 100 Ultra

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
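---

## Example: two-agent AutoGen loop (illustrative)

The sketch below is illustrative only: it shows the standard AutoGen 0.2 assistant/executor pattern against an OpenAI-compatible endpoint. The `base_url`, model name, and API key are placeholders, not the actual Qualcomm Cloud AI 100 Ultra endpoint used by this project.

```python
# Illustrative AutoGen 0.2 sketch -- endpoint URL, model name, and key are placeholders;
# the project itself serves the model on Qualcomm Cloud AI 100 Ultra via the Imagine SDK.
import autogen

config_list = [
    {
        "model": "Meta-Llama-3-8B-Instruct",             # placeholder model name
        "base_url": "https://your-imagine-endpoint/v1",  # placeholder OpenAI-compatible endpoint
        "api_key": "YOUR_API_KEY",
    }
]

# The assistant writes/refines code; the user proxy executes it locally.
assistant = autogen.AssistantAgent(
    name="stock_analyst",
    llm_config={"config_list": config_list},
    system_message="You write and refine Python code to analyse stock data.",
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

user_proxy.initiate_chat(
    assistant,
    message="Plot NVDA's closing price for the last 6 months and save it as nvda.png.",
)
```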
{ "source": "patchy631/ai-engineering-hub", "title": "autogen-stock-analyst/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/autogen-stock-analyst/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 957 }
# Content-writing agentic workflow

This project leverages CrewAI Flow to scrape a website, prepare a social media post, and publish it, powered by a locally running Llama 3.2!

### Watch Demo on YouTube
[![Watch Demo on YouTube](https://github.com/patchy631/ai-engineering-hub/blob/main/content_planner_flow/resources/thumbnail.png)](https://www.youtube.com/watch?v=Nor6vNl1NPo)

## Installation and setup

**Get API Keys**:
- [FireCrawl](https://docs.firecrawl.dev/introduction)
- [Typefully](https://support.typefully.com/en/articles/8718287-typefully-api)

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.
   ```bash
   pip install crewai crewai-tools
   ```

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
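---

## Example: a three-step CrewAI Flow (illustrative)

The sketch below is a simplified illustration, not the repo's flow. It assumes `FIRECRAWL_API_KEY` and `TYPEFULLY_API_KEY` are set, uses `FirecrawlScrapeWebsiteTool` from `crewai-tools`, and posts a draft to Typefully's public drafts endpoint (verify the header format against their docs); the drafting step is a stub where the real workflow would call the locally running Llama 3.2.

```python
# Illustrative CrewAI Flow sketch -- the step bodies are simplified placeholders.
# Assumes FIRECRAWL_API_KEY and TYPEFULLY_API_KEY are set in the environment.
import os

import requests
from crewai.flow.flow import Flow, listen, start
from crewai_tools import FirecrawlScrapeWebsiteTool


class ContentPlannerFlow(Flow):
    @start()
    def scrape_site(self):
        # Scrape the source page with FireCrawl (URL is a placeholder).
        scraper = FirecrawlScrapeWebsiteTool()
        return scraper.run(url="https://example.com/blog-post")

    @listen(scrape_site)
    def draft_post(self, page_markdown):
        # In the real workflow a locally running Llama 3.2 drafts the post;
        # here the scraped text is simply truncated as a stand-in.
        return f"New post draft:\n{page_markdown[:280]}"

    @listen(draft_post)
    def publish_draft(self, draft):
        # Create a draft in Typefully (header format per Typefully's API docs).
        response = requests.post(
            "https://api.typefully.com/v1/drafts/",
            headers={"X-API-KEY": f"Bearer {os.environ['TYPEFULLY_API_KEY']}"},
            json={"content": draft},
            timeout=30,
        )
        return response.status_code


if __name__ == "__main__":
    print(ContentPlannerFlow().kickoff())
```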
{ "source": "patchy631/ai-engineering-hub", "title": "content_planner_flow/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/content_planner_flow/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1265 }
# MultiModal RAG with DeepSeek-Janus-Pro

This project implements a MultiModal RAG with DeepSeek's latest model, Janus-Pro. We use the following tools:

- DeepSeek-Janus-Pro as the multi-modal LLM
- ColPali as the vision encoder
- Qdrant as the vector database
- Streamlit as the web interface

## Demo

A demo of the project is available below:

[Demo video](./video-demo.mp4)

---

## Setup and installations

**Setup Janus**:
```bash
git clone https://github.com/deepseek-ai/Janus.git
pip install -e ./Janus
```

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.
   ```bash
   pip install streamlit fastembed flash-attn transformers
   ```

---

## Run the project

Finally, run the project with the following command:

```bash
streamlit run app.py
```

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
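---

## Example: storing ColPali-style multivectors in Qdrant (illustrative)

The sketch below illustrates only the vector-database side, not the repo's full pipeline: it assumes ColPali-style 128-dimensional token vectors per page (random placeholders here) and a qdrant-client version that supports multivector collections with MAX_SIM (late-interaction) scoring.

```python
# Illustrative sketch: storing ColPali-style multivector page embeddings in Qdrant.
# Real embeddings come from ColPali over page images; random vectors stand in here.
import numpy as np
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # use QdrantClient(url="http://localhost:6333") for a server

client.create_collection(
    collection_name="janus_pages",
    vectors_config=models.VectorParams(
        size=128,                                  # ColPali produces 128-dim token vectors
        distance=models.Distance.COSINE,
        multivector_config=models.MultiVectorConfig(
            comparator=models.MultiVectorComparator.MAX_SIM  # late-interaction scoring
        ),
    ),
)

# One "page" = a bag of token vectors (here 16 random placeholders).
page_embedding = np.random.rand(16, 128).tolist()
client.upsert(
    collection_name="janus_pages",
    points=[models.PointStruct(id=1, vector=page_embedding, payload={"page": 1})],
)

# Query with a (placeholder) multivector query embedding.
query_embedding = np.random.rand(8, 128).tolist()
hits = client.query_points(collection_name="janus_pages", query=query_embedding, limit=1)
print(hits.points[0].payload)
```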
{ "source": "patchy631/ai-engineering-hub", "title": "deepseek-multimodal-RAG/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/deepseek-multimodal-RAG/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1351 }
# Local ChatGPT with thinking UI This project leverages DeepSeek-R1 and Streamlit to create a 100% locally running mini-ChatGPT app. ## Installation and setup **Setup Ollama**: ```bash # setup ollama on linux curl -fsSL https://ollama.com/install.sh | sh # pull the DeepSeek-R1 model ollama pull deepseek-r1 ``` **Install Dependencies**: Ensure you have Python 3.11 or later installed. ```bash pip install streamlit ollama ``` **Run the app**: Run the streamlit app as follows: ```bash streamlit run app.py -w ``` --- ## 📬 Stay Updated with Our Newsletter! **Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com) [![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com) --- ## Contribution Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
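---

## Example: minimal Streamlit + Ollama chat loop (illustrative)

The sketch below is a minimal stand-in for the repo's `app.py`, assuming Ollama is running locally with `deepseek-r1` already pulled. It streams the model's output, including the `<think>...</think>` reasoning that DeepSeek-R1 emits, straight into the chat UI.

```python
# Illustrative sketch of a minimal local chat loop -- not the repo's app.py.
# Assumes Ollama is running locally with the `deepseek-r1` model pulled.
import ollama
import streamlit as st

st.title("Local DeepSeek-R1 chat")

if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask anything..."):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    with st.chat_message("assistant"):
        stream = ollama.chat(
            model="deepseek-r1",
            messages=st.session_state.messages,
            stream=True,
        )
        # DeepSeek-R1 emits its reasoning inside <think>...</think> tags;
        # here everything is streamed verbatim into the UI.
        reply = st.write_stream(chunk["message"]["content"] for chunk in stream)

    st.session_state.messages.append({"role": "assistant", "content": reply})
```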
{ "source": "patchy631/ai-engineering-hub", "title": "deepseek-thinking-ui/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/deepseek-thinking-ui/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1138 }
<a target="_blank" href="https://lightning.ai/akshay-ddods/studios/rag-using-llama-3-3-by-meta-ai">
  <img src="https://pl-bolts-doc-images.s3.us-east-2.amazonaws.com/app-2/studio-badge.svg" alt="Open In Studio"/>
</a>

# Llama 3.3 RAG application

This project leverages a locally running Llama 3.3 to build a RAG application to **chat with your docs**, and Streamlit to build the UI.

## Demo

Watch the demo video:

[![Watch the video](https://github.com/patchy631/ai-engineering-hub/blob/main/document-chat-rag/resources/thumbnail.png)](https://www.youtube.com/watch?v=ZgNJMWipirk)

## Installation and setup

**Setup Ollama**:
```bash
# setup ollama on linux
curl -fsSL https://ollama.com/install.sh | sh
# pull llama 3.3:70B
ollama pull llama3.3
```

**Setup Qdrant VectorDB**
```bash
docker run -p 6333:6333 -p 6334:6334 \
-v $(pwd)/qdrant_storage:/qdrant/storage:z \
qdrant/qdrant
```

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.
   ```bash
   pip install streamlit ollama llama-index-vector-stores-qdrant
   ```

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
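---

## Example: LlamaIndex RAG over Qdrant with Ollama (illustrative)

The sketch below is an illustration, not the repo's `app.py`: it assumes Ollama (with `llama3.3` pulled) and Qdrant are both running locally, a `./docs` folder holds your files, and `llama-index-embeddings-huggingface` is installed as an extra dependency for local embeddings.

```python
# Illustrative sketch -- assumes Ollama (llama3.3 pulled) and Qdrant are running
# locally and ./docs contains your files. Local embeddings use
# llama-index-embeddings-huggingface, an extra install beyond the README's list.
import qdrant_client
from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Local LLM and embedding model.
Settings.llm = Ollama(model="llama3.3", request_timeout=120.0)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# Qdrant-backed vector store.
client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="document_chat")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Index the documents and ask a question.
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

query_engine = index.as_query_engine()
print(query_engine.query("What are the key points of these documents?"))
```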
{ "source": "patchy631/ai-engineering-hub", "title": "document-chat-rag/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/document-chat-rag/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1652 }
# Evaluation and Observability for AI Apps

This project leverages [CometML Opik](https://github.com/comet-ml/opik) to build an end-to-end evaluation and observability pipeline for a RAG application.

## Installation and setup

**Get API Keys**:
- [Opik API Key](https://www.comet.com/signup)
- [OpenAI API Key](https://platform.openai.com/api-keys)

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.
   ```bash
   pip install opik llama-index llama-index-agent-openai llama-index-llms-openai --upgrade --quiet
   ```

**Running**:
You can run the code in the notebook `demo.ipynb`.

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
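---

## Example: tracing a LlamaIndex query engine with Opik (illustrative)

The sketch below is a condensed illustration of wiring Opik into a LlamaIndex RAG pipeline; it is not the contents of `demo.ipynb`. It assumes Opik's LlamaIndex callback handler (`opik.integrations.llama_index.LlamaIndexCallbackHandler`), an `OPENAI_API_KEY` for LlamaIndex's default models, and a `./data` folder with documents.

```python
# Illustrative sketch of wiring Opik tracing into a LlamaIndex query engine.
# Assumes Opik is configured (opik configure / OPIK_API_KEY), OPENAI_API_KEY is set,
# and ./data holds your documents.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.callbacks import CallbackManager
from opik.integrations.llama_index import LlamaIndexCallbackHandler

# Every retrieval and LLM call made below gets logged to your Opik project.
opik_handler = LlamaIndexCallbackHandler()
Settings.callback_manager = CallbackManager([opik_handler])

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)  # uses LlamaIndex's default OpenAI models

query_engine = index.as_query_engine()
print(query_engine.query("Summarize the main findings."))
```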
{ "source": "patchy631/ai-engineering-hub", "title": "eval-and-observability/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/eval-and-observability/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1189 }
# Llama 3.3 RAG application

This project builds the fastest stack for a RAG application to **chat with your docs**. We use:

- SambaNova as the inference engine for Llama 3.3.
- LlamaIndex for orchestrating the RAG app.
- Qdrant VectorDB for storing the embeddings.
- Streamlit to build the UI.

## Installation and setup

**Setup SambaNova**:
Get an API key from [SambaNova](https://sambanova.ai/) and set it in the `.env` file as follows:

```bash
SAMBANOVA_API_KEY=<YOUR_SAMBANOVA_API_KEY>
```

**Setup Qdrant VectorDB**
```bash
docker run -p 6333:6333 -p 6334:6334 \
-v $(pwd)/qdrant_storage:/qdrant/storage:z \
qdrant/qdrant
```

**Install Dependencies**:
   Ensure you have Python 3.11 or later installed.
   ```bash
   pip install streamlit llama-index-vector-stores-qdrant llama-index-llms-sambanovasystems sseclient-py
   ```

**Run the app**:
Run the app with the following command:

```bash
streamlit run app.py
```

---

## 📬 Stay Updated with Our Newsletter!

**Get a FREE Data Science eBook** 📖 with 150+ essential lessons in Data Science when you subscribe to our newsletter! Stay in the loop with the latest tutorials, insights, and exclusive resources. [Subscribe now!](https://join.dailydoseofds.com)

[![Daily Dose of Data Science Newsletter](https://github.com/patchy631/ai-engineering/blob/main/resources/join_ddods.png)](https://join.dailydoseofds.com)

---

## Contribution

Contributions are welcome! Please fork the repository and submit a pull request with your improvements.
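---

## Example: SambaNova-served Llama 3.3 with LlamaIndex and Qdrant (illustrative)

The sketch below is illustrative, not the repo's `app.py`. It assumes `SAMBANOVA_API_KEY` is set (the `SambaNovaCloud` LLM reads it from the environment), Qdrant is running on `localhost:6333`, and a `./docs` folder exists; the model id string is an assumption about SambaNova's catalog, and the embedding model is left at LlamaIndex's default (which needs an OpenAI key) purely to keep the sketch short.

```python
# Illustrative sketch -- not the repo's app.py.
# Assumes SAMBANOVA_API_KEY is exported, Qdrant runs on localhost:6333,
# and ./docs contains the files you want to chat with.
import qdrant_client
from llama_index.core import Settings, SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.llms.sambanovasystems import SambaNovaCloud
from llama_index.vector_stores.qdrant import QdrantVectorStore

# Llama 3.3 70B served by SambaNova Cloud (model id assumed from their catalog);
# the API key is read from the SAMBANOVA_API_KEY environment variable.
Settings.llm = SambaNovaCloud(model="Meta-Llama-3.3-70B-Instruct")

# Qdrant-backed vector store for the embeddings.
client = qdrant_client.QdrantClient(url="http://localhost:6333")
vector_store = QdrantVectorStore(client=client, collection_name="fastest_rag")
storage_context = StorageContext.from_defaults(vector_store=vector_store)

# Index the documents and run a query.
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

print(index.as_query_engine().query("Give me a two-sentence summary of these documents."))
```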
{ "source": "patchy631/ai-engineering-hub", "title": "fastest-rag-stack/README.md", "url": "https://github.com/patchy631/ai-engineering-hub/blob/main/fastest-rag-stack/README.md", "date": "2024-10-21T10:43:24", "stars": 2930, "description": null, "file_size": 1540 }