[b](b)
[b.md](b.md)
[./b.md](./b.md)
[/b.md](/b.md) `[/b.md](/b.md)`
[b#header1](b#header1)
```
[b](b)
```
~~~
[b](b)
~~~
// Indented code
[b](b) | {
"source": "voideditor/void",
"title": "extensions/markdown-language-features/test-workspace/a.md",
"url": "https://github.com/voideditor/void/blob/main/extensions/markdown-language-features/test-workspace/a.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 160
} |
The files `TypeScript.tmLanguage.json` and `TypeScriptReact.tmLanguage.json` are derived from [TypeScript.tmLanguage](https://github.com/microsoft/TypeScript-TmLanguage/blob/master/TypeScript.tmLanguage) and [TypeScriptReact.tmLanguage](https://github.com/microsoft/TypeScript-TmLanguage/blob/master/TypeScriptReact.tmLanguage).
To update to the latest version:
- `cd extensions/typescript` and run `npm run update-grammars`
- don't forget to run the integration tests at `./scripts/test-integration.sh`
Migration notes and todos:
- differentiate variable and function declarations from references
- I suggest we use a new scope segment 'function-call' to signal a function reference, and 'definition' for the declaration. An alternative is to use 'support.function' everywhere.
- I suggest we use a new scope segment 'definition' for variable declarations. I haven't yet found a scope that other grammars use for references.
- rename the scope return.type to return-type, which is already used in other grammars
- rename entity.name.class to entity.name.type.class, which is used in all other grammars I've seen
- do we really want to keep the list of all the 'library' types (Math, Dom, ...)? It adds a lot of size to the grammar and many special rules, and it is not really correct, since which types are present depends on the JavaScript runtime. | {
"source": "voideditor/void",
"title": "extensions/typescript-basics/syntaxes/Readme.md",
"url": "https://github.com/voideditor/void/blob/main/extensions/typescript-basics/syntaxes/Readme.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 1350
} |
# vscode-wasm-typescript
Language server host for typescript using vscode's sync-api in the browser.
## Getting up and running
To test this out, you'll need three shells:
1. `npm i` for vscode itself
2. `npm run watch-web` for the web side
3. `node <root>/scripts/code-web.js --coi`
The last command will open a browser window. You'll want to add `?vscode-coi=`
to the end. This is for enabling shared array buffers. So, for example:
`http://localhost:8080/?vscode-coi=`.
### Working on type acquisition
In order to work with web's new type acquisition, you'll need to enable
`TypeScript > Experimental > Tsserver > Web: Enable Project Wide Intellisense`
in your VS Code options (`Ctrl-,`); you may need to reload the page afterwards.
Type acquisition kicks in when working in a regular `.js` file on a dependency without
declared types. You should be able to open `file.js` and write something like
`import lodash from 'lodash';` at the top of the file and, after a moment, get
types and other intellisense features (like Go To Def/Source Def) working as
expected. This scenario works off Tsserver's own Automatic Type Acquisition
capabilities, and simulates a "global" types cache stored at
`/vscode-global-typings/ts-nul-authority/project`, which is backed by an
in-memory `MemFs` `FileSystemProvider`.
### Simulated `node_modules`
For regular `.ts` files, instead of going through Tsserver's type acquisition,
a separate `AutoInstallerFs` is used to create a "virtual" `node_modules` that
extracts desired packages on demand, to an underlying `MemFs`. This will
happen any time a filesystem operation is done inside a `node_modules` folder
across any project in the workspace, and will use the "real" `package.json`
(and, if present, `package-lock.json`) to resolve the dependency tree.
A fallback is then set up such that when a URI like
`memfs:/path/to/node_modules/lodash/lodash.d.ts` is accessed, that gets
redirected to
`vscode-node-modules:/ts-nul-authority/memfs/ts-nul-authority/path/to/node_modules/lodash/lodash.d.ts`,
which will be sent to the `AutoInstallerFs`. | {
"source": "voideditor/void",
"title": "extensions/typescript-language-features/web/README.md",
"url": "https://github.com/voideditor/void/blob/main/extensions/typescript-language-features/web/README.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 2064
} |
# Integration test
## Compile
Make sure to run the following commands to compile and install dependencies:
    cd test/integration/browser
    npm i
    npm run compile
## Run (inside Electron)
    scripts/test-integration.[sh|bat]
All integration tests run in an Electron instance. You can run the tests against a real build by setting the environment variables `INTEGRATION_TEST_ELECTRON_PATH` and `VSCODE_REMOTE_SERVER_PATH` (if you want to include remote tests).
## Run (inside browser)
    scripts/test-web-integration.[sh|bat] --browser [chromium|webkit] [--debug]
All integration tests run in a browser instance as specified by the command line arguments.
Add the `--debug` flag to see a browser window with the tests running.
**Note**: you can enable verbose logging of the Playwright library by setting a `DEBUG` environment variable before running the tests (<https://playwright.dev/docs/debug#verbose-api-logs>)
## Debug
All integration tests can be run and debugged from within VSCode (both Electron and Web) simply by selecting the related launch configuration and running them. | {
"source": "voideditor/void",
"title": "test/integration/browser/README.md",
"url": "https://github.com/voideditor/void/blob/main/test/integration/browser/README.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 1111
} |
# First
# Second
[b](/b.md)
[b](../b.md)
[b](./../b.md) | {
"source": "voideditor/void",
"title": "extensions/markdown-language-features/test-workspace/sub/c.md",
"url": "https://github.com/voideditor/void/blob/main/extensions/markdown-language-features/test-workspace/sub/c.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 56
} |
<!-- Should highlight math blocks -->
$$
\theta
$$
**md**
$$
\theta{ % comment
$$
**md**
$$
\relax{x}{1} = \int_{-\infty}^\infty
\hat\xi\,e^{2 \pi i \xi x}
\,d\xi % comment
$$
**md**
$
x = 1.1 \int_{a}
$
**md**
$
\begin{smallmatrix}
1 & 2 \\
4 & 3
\end{smallmatrix}
$
$
x = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + \frac{1}{a_3 + a_4}}}
$
$
\displaystyle {1 + \frac{q^2}{(1-q)}+\frac{q^6}{(1-q)(1-q^2)}+\cdots }= \prod_{j=0}^{\infty}\frac{1}{(1-q^{5j+2})(1-q^{5j+3})}, \quad\quad \text{for }\lvert q\rvert<1.
$
<!-- Should highlight inline -->
a **a** $$ \theta $$ aa a **a**
a **a** $ \theta $ aa a **a**
$ \theta $
$$ 1 \theta 1 1 $$
<!-- Should not highlight inline cases without whitespace -->
$10 $20
**a** $10 $20 **a**
**a** a $10 $20 a **a**
a **a**$ \theta $aa a **a**
a **a**$$ \theta $$aa a **a**
<!-- Should be disabled in comments -->
<!--
$$
\theta % comment
$$
-->
<!-- Should be disabled in fenced code blocks -->
```txt
$$
\displaystyle
\left( \sum_{k=1}^n a_k b_k \right)^2
\leq
\left( \sum_{k=1}^n a_k^2 \right)
\left( \sum_{k=1}^n b_k^2 \right)
$$
```
<!-- #128411 -->
- list item
**abc**
$$
\begin{aligned}
&\text{Any equation}
\\
&\text {Inconsistent KaTeX keyword highlighting}
\end{aligned}
$$
**xyz**
<!-- Support both \text{stuff} and \text {stuff} -->
$$
\text{stuff}
\text {stuff}
$$
<!-- Should not highlight inside of raw code block -->
$$
\frac{1}{2}
$$
<!-- Should highlight leading and trailing equations on same line -->
$$ \vec{a}
\vec{a}
\vec{a} $$
**md**
$ \vec{a}
\vec{a}
\vec{a} $
**md**
\vec{a}
**md**
$ \vec{a}
\vec{a}
= [2, 3] $
<!-- Should highlight inline blocks -->
a **b** $$
**b**
**md**
a **b** $$
\frac{1}{2}
$$
**b**
**p**
a **b**
$$
\frac{1}{2}
$$
**b**
<!-- Should allow inline code to be followed by non word character #136584 -->
Should be highlighted $\frac{1}{2}$.
Should not be highlighted $\frac{1}{2}$text
Should not be highlighted $\frac{1}{2}$10
<!-- Should not highlight dollar amount at start of line #136535 -->
$12.45
$12.45 x
x $12.45
<!-- Should not interpret text for skipped percent (\%) -->
$$ \% Should not be highlighted $$ | {
"source": "voideditor/void",
"title": "extensions/vscode-colorize-tests/test/colorize-fixtures/md-math.md",
"url": "https://github.com/voideditor/void/blob/main/extensions/vscode-colorize-tests/test/colorize-fixtures/md-math.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 2257
} |
# h
<pre><code>
# a
</code></pre>
# h
<pre>
# a
a</pre>
# h | {
"source": "voideditor/void",
"title": "extensions/vscode-colorize-tests/test/colorize-fixtures/test-33886.md",
"url": "https://github.com/voideditor/void/blob/main/extensions/vscode-colorize-tests/test/colorize-fixtures/test-33886.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 63
} |
# Header 1 #
## Header 2 ##
### Header 3 ### (Hashes on right are optional)
## Markdown plus h2 with a custom ID ## {#id-goes-here}
[Link back to H2](#id-goes-here)
### Alternate heading styles:
Alternate Header 1
==================
Alternate Header 2
------------------
<!-- html madness -->
<div class="custom-class" markdown="1">
<div>
nested div
</div>
<script type='text/x-koka'>
function( x: int ) { return x*x; }
</script>
This is a div _with_ underscores
and a & <b class="bold">bold</b> element.
<style>
body { font: "Consolas" }
</style>
</div>
* Bullet lists are easy too
- Another one
+ Another one
+ nested list
This is a paragraph, which is text surrounded by
whitespace. Paragraphs can be on one
line (or many), and can drone on for hours.
Now some inline markup like _italics_, **bold**,
and `code()`. Note that underscores
in_words_are ignored.
````application/json
{ value: ["or with a mime type"] }
````
> Blockquotes are like quoted text in email replies
>> And, they can be nested
1. A numbered list
> Block quotes in list
2. Which is numbered
3. With periods and a space
And now some code:
// Code is just text indented a bit
which(is_easy) to_remember();
And a block
~~~
// Markdown extra adds un-indented code blocks too
if (this_is_more_code == true && !indented) {
// tild wrapped code blocks, also not indented
}
~~~
Text with
two trailing spaces
(on the right)
can be used
for things like poems
### Horizontal rules
* * * *
****
--------------------------

## Markdown plus tables ##
| Header | Header | Right |
| ------ | ------ | -----: |
| Cell | Cell | $10 |
| Cell | Cell | $20 |
* Outer pipes on tables are optional
* Colon used for alignment (right versus left)
## Markdown plus definition lists ##
Bottled water
: $ 1.25
: $ 1.55 (Large)
Milk
Pop
: $ 1.75
* Multiple definitions and terms are possible
* Definitions can include multiple paragraphs too
*[ABBR]: Markdown plus abbreviations (produces an <abbr> tag) | {
"source": "voideditor/void",
"title": "extensions/vscode-colorize-tests/test/colorize-fixtures/test.md",
"url": "https://github.com/voideditor/void/blob/main/extensions/vscode-colorize-tests/test/colorize-fixtures/test.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 2109
} |
*italics*, **bold**, ``literal``.
1. A list
2. With items
- With sub-lists ...
- ... of things.
3. Other things
definition list
A list of terms and their definition
Literal block::
x = 2 + 3
Section separators are all interchangeable.
=====
Title
=====
--------
Subtitle
--------
Section 1
=========
Section 2
---------
Section 3
~~~~~~~~~
| Keeping line
| breaks.
+-------------+--------------+
| Fancy table | with columns |
+=============+==============+
| row 1, col 1| row 1, col 2 |
+-------------+--------------+
============ ============
Simple table with columns
============ ============
row 1, col1 row 1, col 2
============ ============
Block quote is indented.
This space intentionally not important.
Doctest block
>>> 2 +3
5
A footnote [#note]_.
.. [#note] https://docutils.sourceforge.io/docs/ref/rst/restructuredtext.html#footnotes
Citation [cite]_.
.. [cite] https://bing.com
a simple link_.
A `fancier link`_ .
.. _link: https://docutils.sourceforge.io/
.. _fancier link: https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html
An `inline link <https://code.visualstudio.com>`__ .
.. image:: https://code.visualstudio.com/assets/images/code-stable.png
.. function: example()
:module: mod
:sub:`subscript`
:sup:`superscript`
.. This is a comment.
..
And a bigger,
longer comment.
A |subst| of something.
.. |subst| replace:: substitution | {
"source": "voideditor/void",
"title": "extensions/vscode-colorize-tests/test/colorize-fixtures/test.rst",
"url": "https://github.com/voideditor/void/blob/main/extensions/vscode-colorize-tests/test/colorize-fixtures/test.rst",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 1431
} |
Tabstops
--
With tabstops you can make the editor cursor move inside a snippet. Use `$1`, `$2` to specify cursor locations. The number is the order in which tabstops will be visited, whereas `$0` denotes the final cursor position. Multiple tabstops are linked and updated in sync.
Placeholders
--
Placeholders are tabstops with values, like `${1:foo}`. The placeholder text will be inserted and selected such that it can be easily changed. Placeholders can be nested, like `${1:another ${2:placeholder}}`.
Choice
--
Placeholders can have choices as values. The syntax is a comma-separated enumeration of values, enclosed with the pipe character, e.g. `${1|one,two,three|}`. When the snippet is inserted and the placeholder selected, the user will be prompted to pick one of the values.
Variables
--
With `$name` or `${name:default}` you can insert the value of a variable. When a variable isn't set, its *default* or the empty string is inserted. When a variable is unknown (that is, its name isn't defined), the name of the variable is inserted and it is transformed into a placeholder. The following variables can be used:
* `TM_SELECTED_TEXT` The currently selected text or the empty string
* `TM_CURRENT_LINE` The contents of the current line
* `TM_CURRENT_WORD` The contents of the word under cursor or the empty string
* `TM_LINE_INDEX` The zero-index based line number
* `TM_LINE_NUMBER` The one-index based line number
* `TM_FILENAME` The filename of the current document
* `TM_FILENAME_BASE` The filename of the current document without its extensions
* `TM_DIRECTORY` The directory of the current document
* `TM_FILEPATH` The full file path of the current document
* `RELATIVE_FILEPATH` The relative (to the opened workspace or folder) file path of the current document
* `CLIPBOARD` The contents of your clipboard
* `WORKSPACE_NAME` The name of the opened workspace or folder
* `WORKSPACE_FOLDER` The path of the opened workspace or folder
For inserting the current date and time:
* `CURRENT_YEAR` The current year
* `CURRENT_YEAR_SHORT` The current year's last two digits
* `CURRENT_MONTH` The month as two digits (example '02')
* `CURRENT_MONTH_NAME` The full name of the month (example 'July')
* `CURRENT_MONTH_NAME_SHORT` The short name of the month (example 'Jul')
* `CURRENT_DATE` The day of the month
* `CURRENT_DAY_NAME` The name of the day (example 'Monday')
* `CURRENT_DAY_NAME_SHORT` The short name of the day (example 'Mon')
* `CURRENT_HOUR` The current hour in 24-hour clock format
* `CURRENT_MINUTE` The current minute
* `CURRENT_SECOND` The current second
* `CURRENT_SECONDS_UNIX` The number of seconds since the Unix epoch
For inserting random values:
* `RANDOM` 6 random Base-10 digits
* `RANDOM_HEX` 6 random Base-16 digits
* `UUID` A Version 4 UUID
Variable-Transform
--
Transformations allow you to modify the value of a variable before it is inserted. The definition of a transformation consists of three parts:
1. A regular expression that is matched against the value of a variable, or the empty string when the variable cannot be resolved.
2. A "format string" that allows to reference matching groups from the regular expression. The format string allows for conditional inserts and simple modifications.
3. Options that are passed to the regular expression
The following sample inserts the name of the current file without its ending, so from `foo.txt` it makes `foo`.
```
${TM_FILENAME/(.*)\..+$/$1/}
| | | |
| | | |-> no options
| | |
| | |-> references the contents of the first
| | capture group
| |
| |-> regex to capture everything before
| the final `.suffix`
|
|-> resolves to the filename
```
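As a sanity check, the effect of this transform can be mirrored with an ordinary regular-expression substitution; a minimal Python sketch (the regex is the one from the sample, not the snippet engine itself):

```python
import re

# Mirror of ${TM_FILENAME/(.*)\..+$/$1/}: strip the final ".suffix" from a filename.
print(re.sub(r"(.*)\..+$", r"\1", "foo.txt"))         # -> foo
print(re.sub(r"(.*)\..+$", r"\1", "archive.tar.gz"))  # -> archive.tar
```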
Placeholder-Transform
--
Like a Variable-Transform, a transformation of a placeholder allows changing the inserted text for the placeholder when moving to the next tab stop.
The inserted text is matched with the regular expression and the match or matches - depending on the options - are replaced with the specified replacement format text.
Every occurrence of a placeholder can define its own transformation independently using the value of the first placeholder.
The format for Placeholder-Transforms is the same as for Variable-Transforms.
The following sample removes an underscore at the beginning of the text. `_transform` becomes `transform`.
```
${1/^_(.*)/$1/}
| | | |-> No options
| | |
| | |-> Replace it with the first capture group
| |
| |-> Regular expression to capture everything after the underscore
|
|-> Placeholder Index
```
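The same kind of substitution, again sketched in plain Python for illustration only:

```python
import re

# Mirror of ${1/^_(.*)/$1/}: drop a single leading underscore, if present.
print(re.sub(r"^_(.*)", r"\1", "_transform"))  # -> transform
print(re.sub(r"^_(.*)", r"\1", "transform"))   # -> transform (unchanged)
```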
Grammar
--
Below is the EBNF for snippets.
```
any ::= tabstop | placeholder | choice | variable | text
tabstop ::= '$' int
| '${' int '}'
| '${' int transform '}'
placeholder ::= '${' int ':' any '}'
choice ::= '${' int '|' text (',' text)* '|}'
variable    ::= '$' var | '${' var '}'
| '${' var ':' any '}'
| '${' var transform '}'
transform ::= '/' regex '/' (format | text)+ '/' options
format ::= '$' int | '${' int '}'
| '${' int ':' '/upcase' | '/downcase' | '/capitalize' | '/camelcase' | '/pascalcase' '}'
| '${' int ':+' if '}'
| '${' int ':?' if ':' else '}'
| '${' int ':-' else '}' | '${' int ':' else '}'
regex ::= JavaScript Regular Expression value (ctor-string)
options ::= JavaScript Regular Expression option (ctor-options)
var ::= [_a-zA-Z] [_a-zA-Z0-9]*
int ::= [0-9]+
text ::= .*
```
Escaping is done with the `\` (backslash) character. The rule of thumb is that you can escape characters that otherwise would have a syntactic meaning, e.g. within text you can escape `$`, `}` and `\`, within choice elements you can escape `|`, `,` and `\`, and within transform elements you can escape `/` and `\`. Also note that in JSON you need to escape `\` as `\\`. | {
"source": "voideditor/void",
"title": "src/vs/editor/contrib/snippet/browser/snippet.md",
"url": "https://github.com/voideditor/void/blob/main/src/vs/editor/contrib/snippet/browser/snippet.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 6021
} |
# Diffing Fixture Tests
Every folder in `fixtures` represents a test.
The file that starts with `1.` is diffed against the file that starts with `2.`. Use `tst` instead of `ts` to avoid compiler/linter errors for TypeScript diff files.
* Missing `*.expected.diff.json` are created automatically (as well as an `*.invalid.diff.json` file).
* If the actual diff does not equal the expected diff, the expected file is updated automatically. The previous value of the expected file is written to `*.invalid.diff.json`.
* The test will fail if there are any `*.invalid.diff.json` files. This makes sure that the test keeps failing even if it is run a second time (a cleanup sketch follows this list).
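Once the updated `*.expected.diff.json` files have been reviewed, the leftover invalid files can be removed in bulk; a minimal Python sketch (the fixtures path is an assumption):

```python
from pathlib import Path

# Assumed location of the fixture folders; adjust if they live elsewhere.
fixtures = Path("src/vs/editor/test/node/diffing/fixtures")
for invalid in fixtures.rglob("*.invalid.diff.json"):
    invalid.unlink()
    print(f"removed {invalid}")
```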
When changing the diffing algorithm, run the fixture tests, review the diff of the `*.expected.diff.json` files and delete all `*.invalid.diff.json` files. | {
"source": "voideditor/void",
"title": "src/vs/editor/test/node/diffing/README.md",
"url": "https://github.com/voideditor/void/blob/main/src/vs/editor/test/node/diffing/README.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 817
} |
Run `node build.js` to compile the React into `out/`.
A couple of things to remember:
- Make sure to add `.js` at the end of any external imports used here, e.g. `../../../../../my_file.js`. If you don't do this, you will get untraceable errors.
- `src/` needs to be shallow (one folder deep) so that the detection of externals works properly (see `tsup.config.js`). | {
"source": "voideditor/void",
"title": "src/vs/workbench/contrib/void/browser/react/README.md",
"url": "https://github.com/voideditor/void/blob/main/src/vs/workbench/contrib/void/browser/react/README.md",
"date": "2024-09-11T02:37:00",
"stars": 10433,
"description": null,
"file_size": 354
} |
# F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching
[](https://github.com/SWivid/F5-TTS)
[](https://arxiv.org/abs/2410.06885)
[](https://swivid.github.io/F5-TTS/)
[](https://huggingface.co/spaces/mrfakename/E2-F5-TTS)
[](https://modelscope.cn/studios/modelscope/E2-F5-TTS)
[](https://x-lance.sjtu.edu.cn/)
[](https://www.pcl.ac.cn)
<!-- <img src="https://github.com/user-attachments/assets/12d7749c-071a-427c-81bf-b87b91def670" alt="Watermark" style="width: 40px; height: auto"> -->
**F5-TTS**: Diffusion Transformer with ConvNeXt V2, faster to train and to run inference with.
**E2 TTS**: Flat-UNet Transformer, closest reproduction from [paper](https://arxiv.org/abs/2406.18009).
**Sway Sampling**: Inference-time flow step sampling strategy, which greatly improves performance.
### Thanks to all the contributors !
## News
- **2024/10/08**: F5-TTS & E2 TTS base models on [🤗 Hugging Face](https://huggingface.co/SWivid/F5-TTS), [🤖 Model Scope](https://www.modelscope.cn/models/SWivid/F5-TTS_Emilia-ZH-EN), [🟣 Wisemodel](https://wisemodel.cn/models/SJTU_X-LANCE/F5-TTS_Emilia-ZH-EN).
## Installation
### Create a separate environment if needed
```bash
# Create a python 3.10 conda env (you could also use virtualenv)
conda create -n f5-tts python=3.10
conda activate f5-tts
```
### Install PyTorch with matched device
<details>
<summary>NVIDIA GPU</summary>
> ```bash
> # Install pytorch with your CUDA version, e.g.
> pip install torch==2.3.0+cu118 torchaudio==2.3.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
> ```
</details>
<details>
<summary>AMD GPU</summary>
> ```bash
> # Install pytorch with your ROCm version (Linux only), e.g.
> pip install torch==2.5.1+rocm6.2 torchaudio==2.5.1+rocm6.2 --extra-index-url https://download.pytorch.org/whl/rocm6.2
> ```
</details>
<details>
<summary>Intel GPU</summary>
> ```bash
> # Install pytorch with your XPU version, e.g.
> # Intel® Deep Learning Essentials or Intel® oneAPI Base Toolkit must be installed
> pip install torch torchaudio --index-url https://download.pytorch.org/whl/test/xpu
>
> # Intel GPU support is also available through IPEX (Intel® Extension for PyTorch)
> # IPEX does not require the Intel® Deep Learning Essentials or Intel® oneAPI Base Toolkit
> # See: https://pytorch-extension.intel.com/installation?request=platform
> ```
</details>
<details>
<summary>Apple Silicon</summary>
> ```bash
> # Install the stable pytorch, e.g.
> pip install torch torchaudio
> ```
</details>
### Then you can choose one from below:
> ### 1. As a pip package (if just for inference)
>
> ```bash
> pip install git+https://github.com/SWivid/F5-TTS.git
> ```
>
> ### 2. Local editable (if also doing training or finetuning)
>
> ```bash
> git clone https://github.com/SWivid/F5-TTS.git
> cd F5-TTS
> # git submodule update --init --recursive # (optional, if need > bigvgan)
> pip install -e .
> ```
### Docker usage also available
```bash
# Build from Dockerfile
docker build -t f5tts:v1 .
# Or pull from GitHub Container Registry
docker pull ghcr.io/swivid/f5-tts:main
```
## Inference
### 1. Gradio App
Currently supported features:
- Basic TTS with Chunk Inference
- Multi-Style / Multi-Speaker Generation
- Voice Chat powered by Qwen2.5-3B-Instruct
- [Custom inference with more language support](src/f5_tts/infer/SHARED.md)
```bash
# Launch a Gradio app (web interface)
f5-tts_infer-gradio
# Specify the port/host
f5-tts_infer-gradio --port 7860 --host 0.0.0.0
# Launch a share link
f5-tts_infer-gradio --share
```
<details>
<summary>NVIDIA device docker compose file example</summary>
```yaml
services:
f5-tts:
image: ghcr.io/swivid/f5-tts:main
ports:
- "7860:7860"
environment:
GRADIO_SERVER_PORT: 7860
entrypoint: ["f5-tts_infer-gradio", "--port", "7860", "--host", "0.0.0.0"]
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
volumes:
f5-tts:
driver: local
```
</details>
### 2. CLI Inference
```bash
# Run with flags
# Leaving --ref_text "" will have the ASR model transcribe the reference audio (extra GPU memory usage)
f5-tts_infer-cli \
--model "F5-TTS" \
--ref_audio "ref_audio.wav" \
--ref_text "The content, subtitle or transcription of reference audio." \
--gen_text "Some text you want TTS model generate for you."
# Run with default setting. src/f5_tts/infer/examples/basic/basic.toml
f5-tts_infer-cli
# Or with your own .toml file
f5-tts_infer-cli -c custom.toml
# Multi voice. See src/f5_tts/infer/README.md
f5-tts_infer-cli -c src/f5_tts/infer/examples/multi/story.toml
```
### 3. More instructions
- In order to have better generation results, take a moment to read [detailed guidance](src/f5_tts/infer).
- The [Issues](https://github.com/SWivid/F5-TTS/issues?q=is%3Aissue) are very useful; please try to find a solution by searching for keywords related to the problem you encountered. If no answer is found, feel free to open an issue.
## Training
### 1. Gradio App
Read [training & finetuning guidance](src/f5_tts/train) for more instructions.
```bash
# Quick start with Gradio web interface
f5-tts_finetune-gradio
```
## [Evaluation](src/f5_tts/eval)
## Development
Use pre-commit to ensure code quality (will run linters and formatters automatically)
```bash
pip install pre-commit
pre-commit install
```
When making a pull request, before each commit, run:
```bash
pre-commit run --all-files
```
Note: Some model components have linting exceptions for E722 to accommodate tensor notation
## Acknowledgements
- [E2-TTS](https://arxiv.org/abs/2406.18009) brilliant work, simple and effective
- [Emilia](https://arxiv.org/abs/2407.05361), [WenetSpeech4TTS](https://arxiv.org/abs/2406.05763), [LibriTTS](https://arxiv.org/abs/1904.02882), [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) valuable datasets
- [lucidrains](https://github.com/lucidrains) initial CFM structure with also [bfs18](https://github.com/bfs18) for discussion
- [SD3](https://arxiv.org/abs/2403.03206) & [Hugging Face diffusers](https://github.com/huggingface/diffusers) DiT and MMDiT code structure
- [torchdiffeq](https://github.com/rtqichen/torchdiffeq) as ODE solver, [Vocos](https://huggingface.co/charactr/vocos-mel-24khz) and [BigVGAN](https://github.com/NVIDIA/BigVGAN) as vocoder
- [FunASR](https://github.com/modelscope/FunASR), [faster-whisper](https://github.com/SYSTRAN/faster-whisper), [UniSpeech](https://github.com/microsoft/UniSpeech), [SpeechMOS](https://github.com/tarepan/SpeechMOS) for evaluation tools
- [ctc-forced-aligner](https://github.com/MahmoudAshraf97/ctc-forced-aligner) for speech edit test
- [mrfakename](https://x.com/realmrfakename) huggingface space demo ~
- [f5-tts-mlx](https://github.com/lucasnewman/f5-tts-mlx/tree/main) Implementation with MLX framework by [Lucas Newman](https://github.com/lucasnewman)
- [F5-TTS-ONNX](https://github.com/DakeQQ/F5-TTS-ONNX) ONNX Runtime version by [DakeQQ](https://github.com/DakeQQ)
## Citation
If our work and codebase are useful for you, please cite as:
```
@article{chen-etal-2024-f5tts,
title={F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching},
author={Yushen Chen and Zhikang Niu and Ziyang Ma and Keqi Deng and Chunhui Wang and Jian Zhao and Kai Yu and Xie Chen},
journal={arXiv preprint arXiv:2410.06885},
year={2024},
}
```
## License
Our code is released under the MIT License. The pre-trained models are licensed under the CC-BY-NC license due to the training data Emilia, which is an in-the-wild dataset. Sorry for any inconvenience this may cause. | {
"source": "SWivid/F5-TTS",
"title": "README.md",
"url": "https://github.com/SWivid/F5-TTS/blob/main/README.md",
"date": "2024-10-08T13:36:55",
"stars": 9872,
"description": "Official code for \"F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching\"",
"file_size": 8127
} |
Pretrained model ckpts. https://huggingface.co/SWivid/F5-TTS
```
ckpts/
E2TTS_Base/
model_1200000.pt
F5TTS_Base/
model_1200000.pt
``` | {
"source": "SWivid/F5-TTS",
"title": "ckpts/README.md",
"url": "https://github.com/SWivid/F5-TTS/blob/main/ckpts/README.md",
"date": "2024-10-08T13:36:55",
"stars": 9872,
"description": "Official code for \"F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching\"",
"file_size": 158
} |
# Evaluation
Install packages for evaluation:
```bash
pip install -e .[eval]
```
## Generating Samples for Evaluation
### Prepare Test Datasets
1. *Seed-TTS testset*: Download from [seed-tts-eval](https://github.com/BytedanceSpeech/seed-tts-eval).
2. *LibriSpeech test-clean*: Download from [OpenSLR](http://www.openslr.org/12/).
3. Unzip the downloaded datasets and place them in the `data/` directory.
4. Update the path for *LibriSpeech test-clean* data in `src/f5_tts/eval/eval_infer_batch.py`
5. Our filtered LibriSpeech-PC 4-10s subset: `data/librispeech_pc_test_clean_cross_sentence.lst`
### Batch Inference for Test Set
To run batch inference for evaluations, execute the following commands:
```bash
# batch inference for evaluations
accelerate config # if not set before
bash src/f5_tts/eval/eval_infer_batch.sh
```
## Objective Evaluation on Generated Results
### Download Evaluation Model Checkpoints
1. Chinese ASR Model: [Paraformer-zh](https://huggingface.co/funasr/paraformer-zh)
2. English ASR Model: [Faster-Whisper](https://huggingface.co/Systran/faster-whisper-large-v3)
3. WavLM Model: Download from [Google Drive](https://drive.google.com/file/d/1-aE1NfzpRCLxA4GUxX9ITI3F9LlbtEGP/view).
Then update the following scripts with the paths where you placed the evaluation model checkpoints.
### Objective Evaluation
Update the path with your batch-inferenced results, and carry out WER / SIM / UTMOS evaluations:
```bash
# Evaluation [WER] for Seed-TTS test [ZH] set
python src/f5_tts/eval/eval_seedtts_testset.py --eval_task wer --lang zh --gen_wav_dir <GEN_WAV_DIR> --gpu_nums 8
# Evaluation [SIM] for LibriSpeech-PC test-clean (cross-sentence)
python src/f5_tts/eval/eval_librispeech_test_clean.py --eval_task sim --gen_wav_dir <GEN_WAV_DIR> --librispeech_test_clean_path <TEST_CLEAN_PATH>
# Evaluation [UTMOS]. --ext: Audio extension
python src/f5_tts/eval/eval_utmos.py --audio_dir <WAV_DIR> --ext wav
``` | {
"source": "SWivid/F5-TTS",
"title": "src/f5_tts/eval/README.md",
"url": "https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/eval/README.md",
"date": "2024-10-08T13:36:55",
"stars": 9872,
"description": "Official code for \"F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching\"",
"file_size": 1931
} |
# Inference
The pretrained model checkpoints can be reached at [🤗 Hugging Face](https://huggingface.co/SWivid/F5-TTS) and [🤖 Model Scope](https://www.modelscope.cn/models/SWivid/F5-TTS_Emilia-ZH-EN), or will be automatically downloaded when running inference scripts.
**More checkpoints with whole community efforts can be found in [SHARED.md](SHARED.md), supporting more languages.**
A **single** generation currently supports up to **30s**, which is the **total length** including both prompt and output audio. However, you can provide `infer_cli` and `infer_gradio` with longer text, and they will automatically do chunked generation. Long reference audio will be **clipped to ~15s**.
To avoid possible inference failures, make sure you have read through the following instructions.
- Use reference audio <15s and leave some silence (e.g. 1s) at the end. Otherwise there is a risk of truncating in the middle of a word, leading to suboptimal generation.
- Uppercase letters will be uttered letter by letter, so use lowercase letters for normal words.
- Add some spaces (blank: " ") or punctuation (e.g. "," ".") to explicitly introduce some pauses.
- Preprocess numbers into Chinese characters if you want them read in Chinese, otherwise they will be read in English.
- If the generation output is blank (pure silence), check for ffmpeg installation (various tutorials online, blogs, videos, etc.).
- Try turning off use_ema if using an early-stage finetuned checkpoint (one trained for only a few updates).
## Gradio App
Currently supported features:
- Basic TTS with Chunk Inference
- Multi-Style / Multi-Speaker Generation
- Voice Chat powered by Qwen2.5-3B-Instruct
- [Custom inference with more language support](src/f5_tts/infer/SHARED.md)
The CLI command `f5-tts_infer-gradio` is equivalent to `python src/f5_tts/infer/infer_gradio.py`, which launches a Gradio app (web interface) for inference.
The script will load model checkpoints from Hugging Face. You can also manually download files and update the path passed to `load_model()` in `infer_gradio.py`. Currently only the TTS models are loaded at first; the ASR model is loaded to do transcription if `ref_text` is not provided, and the LLM model is loaded if Voice Chat is used.
More flags options:
```bash
# Automatically launch the interface in the default web browser
f5-tts_infer-gradio --inbrowser
# Set the root path of the application, if it's not served from the root ("/") of the domain
# For example, if the application is served at "https://example.com/myapp"
f5-tts_infer-gradio --root_path "/myapp"
```
It can also be used as a component of a larger application:
```python
import gradio as gr
from f5_tts.infer.infer_gradio import app
with gr.Blocks() as main_app:
gr.Markdown("# This is an example of using F5-TTS within a bigger Gradio app")
# ... other Gradio components
app.render()
main_app.launch()
```
## CLI Inference
The CLI command `f5-tts_infer-cli` is equivalent to `python src/f5_tts/infer/infer_cli.py`, which is a command line tool for inference.
The script will load model checkpoints from Hugging Face. You can also manually download files and use `--ckpt_file` to specify the model you want to load, or directly update the path in `infer_cli.py`.
To change the vocab, use `--vocab_file` to provide your own `vocab.txt` file.
Basically, you can run inference with flags:
```bash
# Leaving --ref_text "" will have the ASR model transcribe the reference audio (extra GPU memory usage)
f5-tts_infer-cli \
--model "F5-TTS" \
--ref_audio "ref_audio.wav" \
--ref_text "The content, subtitle or transcription of reference audio." \
--gen_text "Some text you want TTS model generate for you."
# Choose Vocoder
f5-tts_infer-cli --vocoder_name bigvgan --load_vocoder_from_local --ckpt_file <YOUR_CKPT_PATH, eg:ckpts/F5TTS_Base_bigvgan/model_1250000.pt>
f5-tts_infer-cli --vocoder_name vocos --load_vocoder_from_local --ckpt_file <YOUR_CKPT_PATH, eg:ckpts/F5TTS_Base/model_1200000.safetensors>
# More instructions
f5-tts_infer-cli --help
```
And a `.toml` file would help with more flexible usage.
```bash
f5-tts_infer-cli -c custom.toml
```
For example, you can use `.toml` to pass in variables, refer to `src/f5_tts/infer/examples/basic/basic.toml`:
```toml
# F5-TTS | E2-TTS
model = "F5-TTS"
ref_audio = "infer/examples/basic/basic_ref_en.wav"
# If an empty "", transcribes the reference audio automatically.
ref_text = "Some call me nature, others call me mother nature."
gen_text = "I don't really care what you call me. I've been a silent spectator, watching species evolve, empires rise and fall. But always remember, I am mighty and enduring."
# File with text to generate. Ignores the text above.
gen_file = ""
remove_silence = false
output_dir = "tests"
```
You can also leverage a `.toml` file to do multi-style generation; refer to `src/f5_tts/infer/examples/multi/story.toml`.
```toml
# F5-TTS | E2-TTS
model = "F5-TTS"
ref_audio = "infer/examples/multi/main.flac"
# If an empty "", transcribes the reference audio automatically.
ref_text = ""
gen_text = ""
# File with text to generate. Ignores the text above.
gen_file = "infer/examples/multi/story.txt"
remove_silence = true
output_dir = "tests"
[voices.town]
ref_audio = "infer/examples/multi/town.flac"
ref_text = ""
[voices.country]
ref_audio = "infer/examples/multi/country.flac"
ref_text = ""
```
You should mark the voice with `[main]`, `[town]`, or `[country]` whenever you want to change the voice; refer to `src/f5_tts/infer/examples/multi/story.txt`.
## Speech Editing
To test speech editing capabilities, use the following command:
```bash
python src/f5_tts/infer/speech_edit.py
```
## Socket Realtime Client
To communicate with the socket server, you need to run:
```bash
python src/f5_tts/socket_server.py
```
<details>
<summary>Then create client to communicate</summary>
```bash
# If PyAudio not installed
sudo apt-get install portaudio19-dev
pip install pyaudio
```
``` python
# Create the socket_client.py
import socket
import asyncio
import pyaudio
import numpy as np
import logging
import time
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
async def listen_to_F5TTS(text, server_ip="localhost", server_port=9998):
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
await asyncio.get_event_loop().run_in_executor(None, client_socket.connect, (server_ip, int(server_port)))
start_time = time.time()
first_chunk_time = None
async def play_audio_stream():
nonlocal first_chunk_time
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=24000, output=True, frames_per_buffer=2048)
try:
while True:
data = await asyncio.get_event_loop().run_in_executor(None, client_socket.recv, 8192)
if not data:
break
if data == b"END":
logger.info("End of audio received.")
break
audio_array = np.frombuffer(data, dtype=np.float32)
stream.write(audio_array.tobytes())
if first_chunk_time is None:
first_chunk_time = time.time()
finally:
stream.stop_stream()
stream.close()
p.terminate()
logger.info(f"Total time taken: {time.time() - start_time:.4f} seconds")
try:
data_to_send = f"{text}".encode("utf-8")
await asyncio.get_event_loop().run_in_executor(None, client_socket.sendall, data_to_send)
await play_audio_stream()
except Exception as e:
logger.error(f"Error in listen_to_F5TTS: {e}")
finally:
client_socket.close()
if __name__ == "__main__":
text_to_send = "As a Reader assistant, I'm familiar with new technology. which are key to its improved performance in terms of both training speed and inference efficiency. Let's break down the components"
asyncio.run(listen_to_F5TTS(text_to_send))
```
</details> | {
"source": "SWivid/F5-TTS",
"title": "src/f5_tts/infer/README.md",
"url": "https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/infer/README.md",
"date": "2024-10-08T13:36:55",
"stars": 9872,
"description": "Official code for \"F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching\"",
"file_size": 7906
} |
<!-- omit in toc -->
# Shared Model Cards
<!-- omit in toc -->
### **Prerequisites of using**
- This document serves as a quick lookup table for community training/finetuning results, with support for various languages.
- The models in this repository are open source and are based on voluntary contributions from contributors.
- The use of the models must be conditioned on respect for the respective creators. The convenience they bring comes from the creators' efforts.
<!-- omit in toc -->
### **Welcome to share here**
- Have a pretrained/finetuned result: a model checkpoint (ideally pruned to facilitate inference, i.e. keeping only `ema_model_state_dict`; see the sketch after this list) and the corresponding vocab file (for tokenization).
- Host a public [huggingface model repository](https://huggingface.co/new) and upload the model related files.
- Make a pull request adding a model card to the current page, i.e. `src\f5_tts\infer\SHARED.md`.
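A minimal sketch of the pruning step mentioned above, assuming a standard PyTorch training checkpoint that contains an `ema_model_state_dict` key (file names are placeholders):

```python
import torch

# Load a full training checkpoint and keep only the EMA weights for inference.
ckpt = torch.load("model_last.pt", map_location="cpu")
torch.save({"ema_model_state_dict": ckpt["ema_model_state_dict"]}, "model_pruned.pt")
```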
<!-- omit in toc -->
### Supported Languages
- [Multilingual](#multilingual)
- [F5-TTS Base @ zh \& en @ F5-TTS](#f5-tts-base--zh--en--f5-tts)
- [English](#english)
- [Finnish](#finnish)
- [F5-TTS Base @ fi @ AsmoKoskinen](#f5-tts-base--fi--asmokoskinen)
- [French](#french)
- [F5-TTS Base @ fr @ RASPIAUDIO](#f5-tts-base--fr--raspiaudio)
- [Hindi](#hindi)
- [F5-TTS Small @ hi @ SPRINGLab](#f5-tts-small--hi--springlab)
- [Italian](#italian)
- [F5-TTS Base @ it @ alien79](#f5-tts-base--it--alien79)
- [Japanese](#japanese)
- [F5-TTS Base @ ja @ Jmica](#f5-tts-base--ja--jmica)
- [Mandarin](#mandarin)
- [Russian](#russian)
- [F5-TTS Base @ ru @ HotDro4illa](#f5-tts-base--ru--hotdro4illa)
- [Spanish](#spanish)
- [F5-TTS Base @ es @ jpgallegoar](#f5-tts-base--es--jpgallegoar)
## Multilingual
#### F5-TTS Base @ zh & en @ F5-TTS
|Model|🤗Hugging Face|Data (Hours)|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Base|[ckpt & vocab](https://huggingface.co/SWivid/F5-TTS/tree/main/F5TTS_Base)|[Emilia 95K zh&en](https://huggingface.co/datasets/amphion/Emilia-Dataset/tree/fc71e07)|cc-by-nc-4.0|
```bash
Model: hf://SWivid/F5-TTS/F5TTS_Base/model_1200000.safetensors
Vocab: hf://SWivid/F5-TTS/F5TTS_Base/vocab.txt
Config: {"dim": 1024, "depth": 22, "heads": 16, "ff_mult": 2, "text_dim": 512, "conv_layers": 4}
```
*Other infos, e.g. Author info, Github repo, Link to some sampled results, Usage instruction, Tutorial (Blog, Video, etc.) ...*
## English
## Finnish
#### F5-TTS Base @ fi @ AsmoKoskinen
|Model|🤗Hugging Face|Data|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Base|[ckpt & vocab](https://huggingface.co/AsmoKoskinen/F5-TTS_Finnish_Model)|[Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0), [Vox Populi](https://huggingface.co/datasets/facebook/voxpopuli)|cc-by-nc-4.0|
```bash
Model: hf://AsmoKoskinen/F5-TTS_Finnish_Model/model_common_voice_fi_vox_populi_fi_20241206.safetensors
Vocab: hf://AsmoKoskinen/F5-TTS_Finnish_Model/vocab.txt
Config: {"dim": 1024, "depth": 22, "heads": 16, "ff_mult": 2, "text_dim": 512, "conv_layers": 4}
```
## French
#### F5-TTS Base @ fr @ RASPIAUDIO
|Model|🤗Hugging Face|Data (Hours)|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Base|[ckpt & vocab](https://huggingface.co/RASPIAUDIO/F5-French-MixedSpeakers-reduced)|[LibriVox](https://librivox.org/)|cc-by-nc-4.0|
```bash
Model: hf://RASPIAUDIO/F5-French-MixedSpeakers-reduced/model_last_reduced.pt
Vocab: hf://RASPIAUDIO/F5-French-MixedSpeakers-reduced/vocab.txt
Config: {"dim": 1024, "depth": 22, "heads": 16, "ff_mult": 2, "text_dim": 512, "conv_layers": 4}
```
- [Online Inference with Hugging Face Space](https://huggingface.co/spaces/RASPIAUDIO/f5-tts_french).
- [Tutorial video to train a new language model](https://www.youtube.com/watch?v=UO4usaOojys).
- [Discussion about this training can be found here](https://github.com/SWivid/F5-TTS/issues/434).
## Hindi
#### F5-TTS Small @ hi @ SPRINGLab
|Model|🤗Hugging Face|Data (Hours)|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Small|[ckpt & vocab](https://huggingface.co/SPRINGLab/F5-Hindi-24KHz)|[IndicTTS Hi](https://huggingface.co/datasets/SPRINGLab/IndicTTS-Hindi) & [IndicVoices-R Hi](https://huggingface.co/datasets/SPRINGLab/IndicVoices-R_Hindi) |cc-by-4.0|
```bash
Model: hf://SPRINGLab/F5-Hindi-24KHz/model_2500000.safetensors
Vocab: hf://SPRINGLab/F5-Hindi-24KHz/vocab.txt
Config: {"dim": 768, "depth": 18, "heads": 12, "ff_mult": 2, "text_dim": 512, "conv_layers": 4}
```
- Authors: SPRING Lab, Indian Institute of Technology, Madras
- Website: https://asr.iitm.ac.in/
## Italian
#### F5-TTS Base @ it @ alien79
|Model|🤗Hugging Face|Data|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Base|[ckpt & vocab](https://huggingface.co/alien79/F5-TTS-italian)|[ylacombe/cml-tts](https://huggingface.co/datasets/ylacombe/cml-tts) |cc-by-nc-4.0|
```bash
Model: hf://alien79/F5-TTS-italian/model_159600.safetensors
Vocab: hf://alien79/F5-TTS-italian/vocab.txt
Config: {"dim": 1024, "depth": 22, "heads": 16, "ff_mult": 2, "text_dim": 512, "conv_layers": 4}
```
- Trained by [Mithril Man](https://github.com/MithrilMan)
- Model details on [hf project home](https://huggingface.co/alien79/F5-TTS-italian)
- Open to collaborations to further improve the model
## Japanese
#### F5-TTS Base @ ja @ Jmica
|Model|🤗Hugging Face|Data (Hours)|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Base|[ckpt & vocab](https://huggingface.co/Jmica/F5TTS/tree/main/JA_25498980)|[Emilia 1.7k JA](https://huggingface.co/datasets/amphion/Emilia-Dataset/tree/fc71e07) & [Galgame Dataset 5.4k](https://huggingface.co/datasets/OOPPEENN/Galgame_Dataset)|cc-by-nc-4.0|
```bash
Model: hf://Jmica/F5TTS/JA_25498980/model_25498980.pt
Vocab: hf://Jmica/F5TTS/JA_25498980/vocab_updated.txt
Config: {"dim": 1024, "depth": 22, "heads": 16, "ff_mult": 2, "text_dim": 512, "conv_layers": 4}
```
## Mandarin
## Russian
#### F5-TTS Base @ ru @ HotDro4illa
|Model|🤗Hugging Face|Data (Hours)|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Base|[ckpt & vocab](https://huggingface.co/hotstone228/F5-TTS-Russian)|[Common voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0)|cc-by-nc-4.0|
```bash
Model: hf://hotstone228/F5-TTS-Russian/model_last.safetensors
Vocab: hf://hotstone228/F5-TTS-Russian/vocab.txt
Config: {"dim": 1024, "depth": 22, "heads": 16, "ff_mult": 2, "text_dim": 512, "conv_layers": 4}
```
- Finetuned by [HotDro4illa](https://github.com/HotDro4illa)
- Any improvements are welcome
## Spanish
#### F5-TTS Base @ es @ jpgallegoar
|Model|🤗Hugging Face|Data (Hours)|Model License|
|:---:|:------------:|:-----------:|:-------------:|
|F5-TTS Base|[ckpt & vocab](https://huggingface.co/jpgallegoar/F5-Spanish)|[Voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli) & Crowdsourced & TEDx, 218 hours|cc0-1.0|
- @jpgallegoar [GitHub repo](https://github.com/jpgallegoar/Spanish-F5), Jupyter Notebook and Gradio usage for Spanish model. | {
"source": "SWivid/F5-TTS",
"title": "src/f5_tts/infer/SHARED.md",
"url": "https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/infer/SHARED.md",
"date": "2024-10-08T13:36:55",
"stars": 9872,
"description": "Official code for \"F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching\"",
"file_size": 7104
} |
# Training
## Prepare Dataset
Example data processing scripts are provided below, and you may tailor your own along with a Dataset class in `src/f5_tts/model/dataset.py`.
### 1. Some specific Datasets preparing scripts
Download the corresponding dataset first, and fill in the path in the scripts.
```bash
# Prepare the Emilia dataset
python src/f5_tts/train/datasets/prepare_emilia.py
# Prepare the Wenetspeech4TTS dataset
python src/f5_tts/train/datasets/prepare_wenetspeech4tts.py
# Prepare the LibriTTS dataset
python src/f5_tts/train/datasets/prepare_libritts.py
# Prepare the LJSpeech dataset
python src/f5_tts/train/datasets/prepare_ljspeech.py
```
### 2. Create custom dataset with metadata.csv
For guidance, see [#57 here](https://github.com/SWivid/F5-TTS/discussions/57#discussioncomment-10959029).
```bash
python src/f5_tts/train/datasets/prepare_csv_wavs.py
```
## Training & Finetuning
Once your datasets are prepared, you can start the training process.
### 1. Training script used for pretrained model
```bash
# setup accelerate config, e.g. use multi-gpu ddp, fp16
# will be to: ~/.cache/huggingface/accelerate/default_config.yaml
accelerate config
# .yaml files are under src/f5_tts/configs directory
accelerate launch src/f5_tts/train/train.py --config-name F5TTS_Base_train.yaml
# possible to overwrite accelerate and hydra config
accelerate launch --mixed_precision=fp16 src/f5_tts/train/train.py --config-name F5TTS_Small_train.yaml ++datasets.batch_size_per_gpu=19200
```
### 2. Finetuning practice
Discussion board for Finetuning [#57](https://github.com/SWivid/F5-TTS/discussions/57).
Gradio UI training/finetuning with `src/f5_tts/train/finetune_gradio.py` see [#143](https://github.com/SWivid/F5-TTS/discussions/143).
Setting `use_ema = True` is harmful for early-stage finetuned checkpoints (which have gone through just a few updates, so the EMA weights are still dominated by the pretrained ones); try turning it off and see if it provides better results.
### 3. Wandb Logging
The `wandb/` dir will be created under the path from which you run the training/finetuning scripts.
By default, the training script does NOT use logging (assuming you didn't manually log in using `wandb login`).
To turn on wandb logging, you can either:
1. Manually login with `wandb login`: Learn more [here](https://docs.wandb.ai/ref/cli/wandb-login)
2. Log in programmatically by setting an environment variable: get an API key at https://wandb.ai/site/ and set the environment variable as follows:
On Mac & Linux:
```
export WANDB_API_KEY=<YOUR WANDB API KEY>
```
On Windows:
```
set WANDB_API_KEY=<YOUR WANDB API KEY>
```
Moreover, if you can't access wandb and want to log metrics offline, you can set the environment variable as follows:
```
export WANDB_MODE=offline
``` | {
"source": "SWivid/F5-TTS",
"title": "src/f5_tts/train/README.md",
"url": "https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/train/README.md",
"date": "2024-10-08T13:36:55",
"stars": 9872,
"description": "Official code for \"F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching\"",
"file_size": 2750
} |
## Backbones quick introduction
### unett.py
- flat unet transformer
- structure same as in e2-tts & voicebox paper except using rotary pos emb
- update: allow possible abs pos emb & convnextv2 blocks for embedded text before concat
### dit.py
- adaln-zero dit
- embedded timestep as condition
- concatted noised_input + masked_cond + embedded_text, linear proj in (see the sketch after this list)
- possible abs pos emb & convnextv2 blocks for embedded text before concat
- possible long skip connection (first layer to last layer)
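A rough, framework-level sketch of the input packing described above; shapes, names, and the use of plain PyTorch are illustrative assumptions, not the actual `dit.py` code:

```python
import torch
import torch.nn as nn

b, n, d = 2, 100, 512          # batch, sequence length, feature dim (illustrative)
noised_input = torch.randn(b, n, d)
masked_cond = torch.randn(b, n, d)
embedded_text = torch.randn(b, n, d)

proj_in = nn.Linear(3 * d, d)  # "linear proj in" applied after concatenation
x = proj_in(torch.cat([noised_input, masked_cond, embedded_text], dim=-1))  # (b, n, d)
```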
### mmdit.py
- sd3 structure
- timestep as condition
- left stream: text embedded and applied an abs pos emb
- right stream: masked_cond & noised_input concatted and with same conv pos emb as unett | {
"source": "SWivid/F5-TTS",
"title": "src/f5_tts/model/backbones/README.md",
"url": "https://github.com/SWivid/F5-TTS/blob/main/src/f5_tts/model/backbones/README.md",
"date": "2024-10-08T13:36:55",
"stars": 9872,
"description": "Official code for \"F5-TTS: A Fairytaler that Fakes Fluent and Faithful Speech with Flow Matching\"",
"file_size": 700
} |
# FlashMLA
FlashMLA is an efficient MLA decoding kernel for Hopper GPUs, optimized for serving variable-length sequences.
Currently released:
- BF16, FP16
- Paged kvcache with block size of 64
## Quick start
### Install
```bash
python setup.py install
```
### Benchmark
```bash
python tests/test_flash_mla.py
```
It achieves up to 3000 GB/s in the memory-bound configuration and 580 TFLOPS in the computation-bound configuration on an H800 SXM5, using CUDA 12.8.
### Usage
```python
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

# Rough meaning of the arguments, inferred from the names: cache_seqlens = per-sequence
# KV cache lengths, s_q = query length per step, h_q / h_kv = number of query / KV heads,
# dv = value head dimension, block_table = paged KV cache block table.
tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
```
## Requirements
- Hopper GPUs
- CUDA 12.3 and above
- **But we highly recommend 12.8 or above for the best performance**
- PyTorch 2.0 and above
## Acknowledgement
FlashMLA is inspired by [FlashAttention 2&3](https://github.com/dao-AILab/flash-attention/) and [cutlass](https://github.com/nvidia/cutlass) projects.
## Citation
```bibtex
@misc{flashmla2025,
title={FlashMLA: Efficient MLA decoding kernels},
author={Jiashi Li},
year={2025},
publisher = {GitHub},
howpublished = {\url{https://github.com/deepseek-ai/FlashMLA}},
}
``` | {
"source": "deepseek-ai/FlashMLA",
"title": "README.md",
"url": "https://github.com/deepseek-ai/FlashMLA/blob/main/README.md",
"date": "2025-02-21T06:31:27",
"stars": 9433,
"description": null,
"file_size": 1420
} |
# AI Hedge Fund
This is a proof of concept for an AI-powered hedge fund. The goal of this project is to explore the use of AI to make trading decisions. This project is for **educational** purposes only and is not intended for real trading or investment.
This system employs several agents working together:
1. Ben Graham Agent - The godfather of value investing, only buys hidden gems with a margin of safety
2. Bill Ackman Agent - An activist investor, takes bold positions and pushes for change
3. Catherine Wood Agent - The queen of growth investing, believes in the power of innovation and disruption
4. Warren Buffett Agent - The oracle of Omaha, seeks wonderful companies at a fair price
5. Valuation Agent - Calculates the intrinsic value of a stock and generates trading signals
6. Sentiment Agent - Analyzes market sentiment and generates trading signals
7. Fundamentals Agent - Analyzes fundamental data and generates trading signals
8. Technicals Agent - Analyzes technical indicators and generates trading signals
9. Risk Manager - Calculates risk metrics and sets position limits
10. Portfolio Manager - Makes final trading decisions and generates orders
<img width="1117" alt="Screenshot 2025-02-09 at 11 26 14 AM" src="https://github.com/user-attachments/assets/16509cc2-4b64-4c67-8de6-00d224893d58" />
**Note**: the system simulates trading decisions, it does not actually trade.
[](https://twitter.com/virattt)
## Disclaimer
This project is for **educational and research purposes only**.
- Not intended for real trading or investment
- No warranties or guarantees provided
- Past performance does not indicate future results
- Creator assumes no liability for financial losses
- Consult a financial advisor for investment decisions
By using this software, you agree to use it solely for learning purposes.
## Table of Contents
- [Setup](#setup)
- [Usage](#usage)
- [Running the Hedge Fund](#running-the-hedge-fund)
- [Running the Backtester](#running-the-backtester)
- [Project Structure](#project-structure)
- [Contributing](#contributing)
- [Feature Requests](#feature-requests)
- [License](#license)
## Setup
Clone the repository:
```bash
git clone https://github.com/virattt/ai-hedge-fund.git
cd ai-hedge-fund
```
1. Install Poetry (if not already installed):
```bash
curl -sSL https://install.python-poetry.org | python3 -
```
2. Install dependencies:
```bash
poetry install
```
3. Set up your environment variables:
```bash
# Create .env file for your API keys
cp .env.example .env
```
4. Set your API keys:
```bash
# For running LLMs hosted by openai (gpt-4o, gpt-4o-mini, etc.)
# Get your OpenAI API key from https://platform.openai.com/
OPENAI_API_KEY=your-openai-api-key
# For running LLMs hosted by groq (deepseek, llama3, etc.)
# Get your Groq API key from https://groq.com/
GROQ_API_KEY=your-groq-api-key
# For getting financial data to power the hedge fund
# Get your Financial Datasets API key from https://financialdatasets.ai/
FINANCIAL_DATASETS_API_KEY=your-financial-datasets-api-key
```
**Important**: You must set `OPENAI_API_KEY`, `GROQ_API_KEY`, or `ANTHROPIC_API_KEY` for the hedge fund to work. If you want to use LLMs from all providers, you will need to set all API keys.
Financial data for AAPL, GOOGL, MSFT, NVDA, and TSLA is free and does not require an API key.
For any other ticker, you will need to set the `FINANCIAL_DATASETS_API_KEY` in the .env file.
## Usage
### Running the Hedge Fund
```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA
```
**Example Output:**
<img width="992" alt="Screenshot 2025-01-06 at 5 50 17 PM" src="https://github.com/user-attachments/assets/e8ca04bf-9989-4a7d-a8b4-34e04666663b" />
You can also specify a `--show-reasoning` flag to print the reasoning of each agent to the console.
```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA --show-reasoning
```
You can optionally specify the start and end dates to make decisions for a specific time period.
```bash
poetry run python src/main.py --ticker AAPL,MSFT,NVDA --start-date 2024-01-01 --end-date 2024-03-01
```
### Running the Backtester
```bash
poetry run python src/backtester.py --ticker AAPL,MSFT,NVDA
```
**Example Output:**
<img width="941" alt="Screenshot 2025-01-06 at 5 47 52 PM" src="https://github.com/user-attachments/assets/00e794ea-8628-44e6-9a84-8f8a31ad3b47" />
You can optionally specify the start and end dates to backtest over a specific time period.
```bash
poetry run python src/backtester.py --ticker AAPL,MSFT,NVDA --start-date 2024-01-01 --end-date 2024-03-01
```
## Project Structure
```
ai-hedge-fund/
├── src/
│ ├── agents/ # Agent definitions and workflow
│ │ ├── bill_ackman.py # Bill Ackman agent
│ │ ├── fundamentals.py # Fundamental analysis agent
│ │ ├── portfolio_manager.py # Portfolio management agent
│ │ ├── risk_manager.py # Risk management agent
│ │ ├── sentiment.py # Sentiment analysis agent
│ │ ├── technicals.py # Technical analysis agent
│ │ ├── valuation.py # Valuation analysis agent
│ │ ├── warren_buffett.py # Warren Buffett agent
│ ├── tools/ # Agent tools
│ │ ├── api.py # API tools
│ ├── backtester.py # Backtesting tools
│ ├── main.py # Main entry point
├── pyproject.toml
├── ...
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request
**Important**: Please keep your pull requests small and focused. This will make it easier to review and merge.
## Feature Requests
If you have a feature request, please open an [issue](https://github.com/virattt/ai-hedge-fund/issues) and make sure it is tagged with `enhancement`.
## License
This project is licensed under the MIT License - see the LICENSE file for details. | {
"source": "virattt/ai-hedge-fund",
"title": "README.md",
"url": "https://github.com/virattt/ai-hedge-fund/blob/main/README.md",
"date": "2024-11-29T16:30:01",
"stars": 8829,
"description": "An AI Hedge Fund Team",
"file_size": 5993
} |
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**Screenshot**
Add a screenshot of the bug to help explain your problem.
**Additional context**
Add any other context about the problem here. | {
"source": "virattt/ai-hedge-fund",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/virattt/ai-hedge-fund/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2024-11-29T16:30:01",
"stars": 8829,
"description": "An AI Hedge Fund Team",
"file_size": 321
} |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Describe the feature you'd like**
A clear and concise description of what you want to happen. | {
"source": "virattt/ai-hedge-fund",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/virattt/ai-hedge-fund/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-11-29T16:30:01",
"stars": 8829,
"description": "An AI Hedge Fund Team",
"file_size": 211
} |
## Compilation
Building the project is only necessary for users who want to contribute code.
If you wish to build the emulator yourself, follow these steps:
### Step 1
Install the [.NET 9.0 (or higher) SDK](https://dotnet.microsoft.com/download/dotnet/9.0).
Make sure your SDK version is higher than or equal to the required version specified in [global.json](global.json).
### Step 2
Either use `git clone https://github.com/Ryubing/Ryujinx` on the command line to clone the repository, or use the Code --> Download ZIP button to get the files.
### Step 3
To build Ryujinx, open a command prompt inside the project directory.
You can quickly access it on Windows by holding Shift in File Explorer, then right-clicking and selecting `Open command window here`.
Then type the following command: `dotnet build -c Release -o build`
The built files will be placed in the newly created `build` directory.
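Putting the steps together, a minimal end-to-end sketch on the command line (using the clone URL from Step 2 and the build command above):
```bash
# Clone the repository and produce a Release build in ./build
git clone https://github.com/Ryubing/Ryujinx
cd Ryujinx
dotnet build -c Release -o build
```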
Ryujinx system files are stored in the `Ryujinx` folder.
This folder is located in the user folder, which can be accessed by clicking `Open Ryujinx Folder` under the File menu in the GUI. | {
"source": "Ryubing/Ryujinx",
"title": "COMPILING.md",
"url": "https://github.com/Ryubing/Ryujinx/blob/master/COMPILING.md",
"date": "2024-10-06T20:58:11",
"stars": 8135,
"description": "Nintendo Switch emulator written in C#, originally created by gdkchan.",
"file_size": 1073
} |
# Contribution to Ryujinx
You can contribute to Ryujinx by opening PRs, testing PRs, and filing issues. Contributing code and other implementations is greatly appreciated, as is simply filing issues for problems you encounter.
Please read the entire document before continuing as it can potentially save everyone involved a significant amount of time.
# Quick Links
* [Code Style Documentation](docs/coding-guidelines/coding-style.md)
* [Pull Request Guidelines](docs/workflow/pr-guide.md)
## Reporting Issues
We always welcome bug reports, feature proposals and overall feedback. Here are a few tips on how you can make reporting your issue as effective as possible.
### Finding Existing Issues
Before filing a new issue, please search our [open issues](https://github.com/Ryubing/Ryujinx/issues) to check if it already exists.
If you do find an existing issue, please include your own feedback in the discussion. Do consider upvoting (👍 reaction) the original post, as this helps us prioritize popular issues in our backlog.
### Writing a Good Feature Request
Please review any feature requests already opened to both check it has not already been suggested, and to familiarize yourself with the format. When ready to submit a proposal, please use the [Feature Request issue template](https://github.com/Ryubing/Ryujinx/issues/new?assignees=&labels=&projects=&template=feature_request.yml&title=%5BFeature+Request%5D).
### Writing a Good Bug Report
Good bug reports make it easier for maintainers to verify and root cause the underlying problem. The better a bug report, the faster the problem will be resolved.
Ideally, a bug report should contain the following information:
* A high-level description of the problem.
* A _minimal reproduction_, i.e. the smallest time commitment/configuration required to reproduce the wrong behavior. This can be in the form of a small homebrew application, or by providing a save file and reproduction steps for a specific game.
* A description of the _expected behavior_, contrasted with the _actual behavior_ observed.
* Information on the environment: OS/distro, CPU, GPU (including driver), RAM etc.
* A Ryujinx log file of the run instance where the issue occurred. Log files can be found in `[Executable Folder]/Logs` and are named chronologically.
* Additional information, e.g. is it a regression from previous versions? Are there any known workarounds?
When ready to submit a bug report, please use the [Bug Report issue template](https://github.com/Ryubing/Ryujinx/issues/new?assignees=&labels=bug&projects=&template=bug_report.yml&title=%5BBug%5D).
## Contributing Changes
Project maintainers will merge changes that both improve the project and meet our standards for code quality.
The [Pull Request Guide](docs/workflow/pr-guide.md) and [License](https://github.com/Ryubing/Ryujinx/blob/master/LICENSE.txt) docs define additional guidance.
### DOs and DON'Ts
Please do:
* **DO** follow our [coding style](docs/coding-guidelines/coding-style.md) (C# code-specific).
* **DO** give priority to the current style of the project or file you're changing even if it diverges from the general guidelines.
* **DO** keep the discussions focused. When a new or related topic comes up,
  it's often better to create a new issue than to sidetrack the discussion.
* **DO** clearly state on an issue that you are going to take on implementing it.
* **DO** blog and tweet (or whatever) about your contributions, frequently!
Please do not:
* **DON'T** make PRs for style changes.
* **DON'T** surprise us with big pull requests. Instead, file an issue and talk with us on Discord to start
a discussion so we can agree on a direction before you invest a large amount
of time.
* **DON'T** commit code that you didn't write. If you find code that you think is a good fit to add to Ryujinx, file an issue or talk to us on Discord to start a discussion before proceeding.
* **DON'T** submit PRs that alter licensing related files or headers. If you believe there's a problem with them, file an issue and we'll be happy to discuss it.
### Suggested Workflow
We use and recommend the following workflow (a condensed command sketch follows the list):
1. Create or find an issue for your work.
- You can skip this step for trivial changes.
- Get agreement from the team and the community that your proposed change is a good one if it is of significant size or changes core functionality.
- Clearly state that you are going to take on implementing it, if that's the case. You can request that the issue be assigned to you. Note: The issue filer and the implementer don't have to be the same person.
2. Create a personal fork of the repository on GitHub (if you don't already have one).
3. In your fork, create a branch off of main (`git checkout -b mybranch`).
- Branches are useful since they isolate your changes from incoming changes from upstream. They also enable you to create multiple PRs from the same fork.
4. Make and commit your changes to your branch.
- [Build Instructions](https://github.com/Ryubing/Ryujinx/blob/master/COMPILING.md) explains how to build and test.
- Commit messages should be clear statements of action and intent.
5. Build the repository with your changes.
- Make sure that the builds are clean.
- Make sure that `dotnet format` has been run and any corrections tested and committed.
6. Create a pull request (PR) against the Ryujinx/Ryujinx repository's **main** branch.
- State in the description what issue or improvement your change is addressing.
- Check if all the Continuous Integration checks are passing. Refer to [Actions](https://github.com/Ryubing/Ryujinx/actions) to check for outstanding errors.
7. Wait for feedback or approval of your changes from the core development team.
- Details about the pull request [review procedure](docs/workflow/pr-guide.md).
8. When the team members have signed off, and all checks are green, your PR will be merged.
- The next official build will automatically include your change.
- You can delete the branch you used for making the change.
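A condensed sketch of the commands behind steps 2 through 6 above (the fork URL and branch name are placeholders; substitute your own):
```bash
# Clone your personal fork and branch off main (placeholders: <your-username>, mybranch)
git clone https://github.com/<your-username>/Ryujinx
cd Ryujinx
git checkout -b mybranch

# ...make and commit your changes...

# Run the formatter and a clean Release build before opening the PR
dotnet format
dotnet build -c Release -o build

# Push the branch to your fork, then open the PR against Ryubing/Ryujinx main on GitHub
git push origin mybranch
```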
### Good First Issues
The team marks the most straightforward issues as [good first issues](https://github.com/Ryubing/Ryujinx/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). This set of issues is the place to start if you are interested in contributing but new to the codebase.
### Commit Messages
Please format commit messages as follows (based on [A Note About Git Commit Messages](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)):
```
Summarize change in 50 characters or less
Provide more detail after the first line. Leave one blank line below the
summary and wrap all lines at 72 characters or less.
If the change fixes an issue, leave another blank line after the final
paragraph and indicate which issue is fixed in the specific format
below.
Fix #42
```
Also do your best to factor commits appropriately, not too large with unrelated things in the same commit, and not too small with the same small change applied N times in N different commits.
### PR - CI Process
The [Ryujinx continuous integration](https://github.com/Ryubing/Ryujinx/actions) (CI) system will automatically perform the required builds and run tests (including the ones you are expected to run) for PRs. Builds and test runs must be clean or have bugs properly filed against flaky/unexpected failures that are unrelated to your change.
If the CI build fails for any reason, the PR actions tab should be consulted for further information on the failure. There are a few usual suspects for such a failure:
* `dotnet format` has not been run on the PR and has outstanding stylistic issues.
* There is an error within the PR that fails a test or errors the compiler.
* Random failure of the workflow can occasionally result in a CI failure. In this scenario a maintainer will manually restart the job.
### PR Feedback
Ryujinx team and community members will provide feedback on your change. Community feedback is highly valued. You may see the absence of team feedback if the community has already provided good review feedback.
Two Ryujinx team members must review and approve every PR prior to merge. They will often reply with "LGTM, see nit". That means that the PR will be merged once the feedback is resolved. "LGTM" == "looks good to me".
There are lots of thoughts and [approaches](https://github.com/antlr/antlr4-cpp/blob/master/CONTRIBUTING.md#emoji) for how to efficiently discuss changes. It is best to be clear and explicit with your feedback. Please be patient with people who might not understand the finer details about your approach to feedback.
#### Copying Changes from Other Projects
Ryujinx uses some implementations and frameworks from other projects. The following rules must be followed for PRs that include changes from another project:
- The license of the file is [permissive](https://en.wikipedia.org/wiki/Permissive_free_software_licence).
- The license of the file is left intact.
- The contribution is correctly attributed in the [3rd party notices](https://github.com/Ryubing/Ryujinx/blob/master/distribution/legal/THIRDPARTY.md) file in the repository, as needed. | {
"source": "Ryubing/Ryujinx",
"title": "CONTRIBUTING.md",
"url": "https://github.com/Ryubing/Ryujinx/blob/master/CONTRIBUTING.md",
"date": "2024-10-06T20:58:11",
"stars": 8135,
"description": "Nintendo Switch emulator written in C#, originally created by gdkchan.",
"file_size": 9200
} |
<table align="center">
<tr>
<td align="center" width="25%">
<img src="https://raw.githubusercontent.com/GreemDev/ryuassets/refs/heads/main/RyujinxApp_1024.png" alt="Ryujinx" >
</td>
<td align="center" width="75%">
# Ryujinx
[](https://github.com/Ryubing/Ryujinx/actions/workflows/release.yml)
[](https://github.com/Ryubing/Ryujinx/releases/latest)
<br>
[](https://github.com/Ryubing/Ryujinx/actions/workflows/canary.yml)
[](https://github.com/Ryubing/Canary-Releases/releases/latest)
</td>
</tr>
</table>
<p align="center">
Ryujinx is an open-source Nintendo Switch emulator, originally created by gdkchan, written in C#.
This emulator aims at providing excellent accuracy and performance, a user-friendly interface and consistent builds.
It was written from scratch and development on the project began in September 2017.
Ryujinx is available on GitHub under the <a href="https://github.com/Ryubing/Ryujinx/blob/master/LICENSE.txt" target="_blank">MIT license</a>.
<br />
</p>
<p align="center">
On October 1st 2024, Ryujinx was discontinued as the creator was forced to abandon the project.
<br>
This fork is intended to be a QoL uplift for existing Ryujinx users.
<br>
This is not a Ryujinx revival project. This is not a Phoenix project.
<br>
Guides and documentation can be found on the <a href="https://github.com/Ryubing/Ryujinx/wiki">Wiki tab</a>.
</p>
<p align="center">
If you would like a more preservative fork of Ryujinx, check out <a href="https://github.com/ryujinx-mirror/ryujinx">ryujinx-mirror</a>.
</p>
<p align="center">
Click below to join the Discord:
<br>
<a href="https://discord.gg/PEuzjrFXUA">
<img src="https://img.shields.io/discord/1294443224030511104?color=5865F2&label=Ryubing&logo=discord&logoColor=white" alt="Discord">
</a>
<br>
<br>
<img src="https://raw.githubusercontent.com/Ryubing/Ryujinx/refs/heads/master/docs/shell.png">
</p>
## Usage
To run this emulator, your PC must be equipped with at least 8GiB of RAM;
failing to meet this requirement may result in a poor gameplay experience or unexpected crashes.
## Latest build
Stable builds are made every so often from the `master` branch and are published as the releases you know and love.
These stable builds exist so that the end user can get a more **enjoyable and stable experience**.
They are released every month or so to ensure consistent updates, without an overwhelming number of individual updates to download over the course of that month.
You can find the latest stable release [here](https://github.com/Ryubing/Ryujinx/releases/latest).
Canary builds are compiled automatically for each commit on the `master` branch.
While we strive to ensure optimal stability and performance prior to pushing an update, these builds **may be unstable or completely broken**.
These canary builds are only recommended for experienced users.
You can find the latest canary release [here](https://github.com/Ryubing/Canary-Releases/releases/latest).
## Documentation
If you are planning to contribute or just want to learn more about this project please read through our [documentation](docs/README.md).
## Features
- **Audio**
Audio output is entirely supported; audio input (microphone) isn't supported.
We use C# wrappers for [OpenAL](https://openal-soft.org/), and [SDL2](https://www.libsdl.org/) & [libsoundio](http://libsound.io/) as fallbacks.
- **CPU**
The CPU emulator, ARMeilleure, emulates an ARMv8 CPU and currently has support for most 64-bit ARMv8 and some of the ARMv7 (and older) instructions, including partial 32-bit support.
It translates the ARM code to a custom IR, performs a few optimizations, and turns that into x86 code.
There are three memory manager options available depending on the user's preference, leveraging both software-based (slower) and host-mapped modes (much faster).
The fastest option (host, unchecked) is set by default.
Ryujinx also features an optional Profiled Persistent Translation Cache, which essentially caches translated functions so that they do not need to be translated every time the game loads.
The net result is a significant reduction in load times (the amount of time between launching a game and arriving at the title screen) for nearly every game.
NOTE: This feature is enabled by default in the Options menu > System tab.
You must launch the game at least twice to the title screen or beyond before performance improvements are unlocked on the third launch!
These improvements are permanent and do not require any extra launches going forward.
- **GPU**
The GPU emulator emulates the Switch's Maxwell GPU using either the OpenGL (version 4.5 minimum), Vulkan, or Metal (via MoltenVK) APIs through a custom build of OpenTK or Silk.NET respectively.
There are currently six graphics enhancements available to the end user in Ryujinx: Disk Shader Caching, Resolution Scaling, Anti-Aliasing, Scaling Filters (including FSR), Anisotropic Filtering and Aspect Ratio Adjustment.
These enhancements can be adjusted or toggled as desired in the GUI.
- **Input**
We currently have support for keyboard, mouse, touch input, JoyCon input, and nearly all controllers.
Motion controls are natively supported in most cases; for dual-JoyCon motion support, DS4Windows or BetterJoy are currently required.
In all scenarios, you can set up everything inside the input configuration menu.
- **DLC & Modifications**
Ryujinx is able to manage add-on content/downloadable content through the GUI.
Mods (romfs, exefs, and runtime mods such as cheats) are also supported;
the GUI contains a shortcut to open the respective mods folder for a particular game.
- **Configuration**
The emulator has settings for enabling or disabling some logging, remapping controllers, and more.
You can configure all of them through the graphical interface or manually through the config file, `Config.json`, found in the Ryujinx data folder which can be accessed by clicking `Open Ryujinx Folder` under the File menu in the GUI.
## License
This software is licensed under the terms of the [MIT license](LICENSE.txt).
This project makes use of code authored by the libvpx project, licensed under BSD and the ffmpeg project, licensed under LGPLv3.
See [LICENSE.txt](LICENSE.txt) and [THIRDPARTY.md](distribution/legal/THIRDPARTY.md) for more details.
## Credits
- [LibHac](https://github.com/Thealexbarney/LibHac) is used for our file-system.
- [AmiiboAPI](https://www.amiiboapi.com) is used in our Amiibo emulation.
- [ldn_mitm](https://github.com/spacemeowx2/ldn_mitm) is used for one of our available multiplayer modes.
- [ShellLink](https://github.com/securifybv/ShellLink) is used for Windows shortcut generation. | {
"source": "Ryubing/Ryujinx",
"title": "README.md",
"url": "https://github.com/Ryubing/Ryujinx/blob/master/README.md",
"date": "2024-10-06T20:58:11",
"stars": 8135,
"description": "Nintendo Switch emulator written in C#, originally created by gdkchan.",
"file_size": 7221
} |
# Documents Index
This repo includes several documents that explain both high-level and low-level concepts about Ryujinx and its functions. These are very useful for contributors, to get context that can be very difficult to acquire from just reading code.
Intro to Ryujinx
==================
Ryujinx is an open-source Nintendo Switch emulator, created by gdkchan, written in C#.
* The CPU emulator, ARMeilleure, emulates an ARMv8 CPU and currently has support for most 64-bit ARMv8 and some of the ARMv7 (and older) instructions.
* The GPU emulator emulates the Switch's Maxwell GPU using either the OpenGL (version 4.5 minimum), Vulkan, or Metal (via MoltenVK) APIs through a custom build of OpenTK or Silk.NET respectively.
* Audio output is entirely supported via C# wrappers for SDL2, with OpenAL & libsoundio as fallbacks.
Getting Started
===============
- [Installing the .NET SDK](https://dotnet.microsoft.com/download)
- [Official .NET Docs](https://docs.microsoft.com/dotnet/core/)
Contributing (Building, testing, benchmarking, profiling, etc.)
===============
If you want to contribute a code change to this repo, start here.
- [Contributor Guide](../CONTRIBUTING.md)
Coding Guidelines
=================
- [C# coding style](coding-guidelines/coding-style.md)
- [Service Implementation Guidelines - WIP](https://gist.github.com/gdkchan/84ba88cd50efbe58d1babfaa7cd7c455)
Project Docs
=================
To be added. Many project files will contain basic XML docs for key functions and classes in the meantime. | {
"source": "Ryubing/Ryujinx",
"title": "docs/README.md",
"url": "https://github.com/Ryubing/Ryujinx/blob/master/docs/README.md",
"date": "2024-10-06T20:58:11",
"stars": 8135,
"description": "Nintendo Switch emulator written in C#, originally created by gdkchan.",
"file_size": 1531
} |
# ffmpeg (LGPLv3)
<details>
<summary>See License</summary>
```
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.
"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.
An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.
A "Combined Work" is a work produced by combining or linking an
Application with the Library. The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".
The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:
a) under this License, provided that you make a good faith effort to
ensure that, in the event an Application does not supply the
function or data, the facility still operates, and performs
whatever part of its purpose remains meaningful, or
b) under the GNU GPL, with none of the additional permissions of
this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from
a header file that is part of the Library. You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the
Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license
document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:
a) Give prominent notice with each copy of the Combined Work that
the Library is used in it and that the Library and its use are
covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license
document.
c) For a Combined Work that displays copyright notices during
execution, include the copyright notice for the Library among
these notices, as well as a reference directing the user to the
copies of the GNU GPL and this license document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this
License, and the Corresponding Application Code in a form
suitable for, and under terms that permit, the user to
recombine or relink the Application with a modified version of
the Linked Version to produce a modified Combined Work, in the
manner specified by section 6 of the GNU GPL for conveying
Corresponding Source.
1) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (a) uses at run time
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the
GNU GPL, and only to the extent that such information is
necessary to install and execute a modified version of the
Combined Work produced by recombining or relinking the
Application with a modified version of the Linked Version. (If
you use option 4d0, the Installation Information must accompany
the Minimal Corresponding Source and Corresponding Application
Code. If you use option 4d1, you must provide the Installation
Information in the manner specified by section 6 of the GNU GPL
for conveying Corresponding Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based
on the Library, uncombined with any other library facilities,
conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it
is a work based on the Library, and explaining where to find the
accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.
If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
```
</details>
# libvpx (BSD)
<details>
<summary>See License</summary>
```
Copyright (c) 2010, The WebM Project authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* Neither the name of Google, nor the WebM Project, nor the names
of its contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
</details>
# Atmosphère (MIT)
<details>
<summary>See License</summary>
```
MIT License
Copyright (c) 2018-2020 Atmosphère-NX
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
</details>
# OpenAL Soft (LGPLv2)
<details>
<summary>See License</summary>
```
GNU LIBRARY GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the library GPL. It is
numbered 2 because it goes with version 2 of the ordinary GPL.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Library General Public License, applies to some
specially designated Free Software Foundation software, and to any
other libraries whose authors decide to use it. You can use it for
your libraries, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if
you distribute copies of the library, or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link a program with the library, you must provide
complete object files to the recipients so that they can relink them
with the library, after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
Our method of protecting your rights has two steps: (1) copyright
the library, and (2) offer you this license which gives you legal
permission to copy, distribute and/or modify the library.
Also, for each distributor's protection, we want to make certain
that everyone understands that there is no warranty for this free
library. If the library is modified by someone else and passed on, we
want its recipients to know that what they have is not the original
version, so that any problems introduced by others will not reflect on
the original authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that companies distributing free
software will individually obtain patent licenses, thus in effect
transforming the program into proprietary software. To prevent this,
we have made it clear that any patent must be licensed for everyone's
free use or not licensed at all.
Most GNU software, including some libraries, is covered by the ordinary
GNU General Public License, which was designed for utility programs. This
license, the GNU Library General Public License, applies to certain
designated libraries. This license is quite different from the ordinary
one; be sure to read it in full, and don't assume that anything in it is
the same as in the ordinary license.
The reason we have a separate public license for some libraries is that
they blur the distinction we usually make between modifying or adding to a
program and simply using it. Linking a program with a library, without
changing the library, is in some sense simply using the library, and is
analogous to running a utility program or application program. However, in
a textual and legal sense, the linked executable is a combined work, a
derivative of the original library, and the ordinary General Public License
treats it as such.
Because of this blurred distinction, using the ordinary General
Public License for libraries did not effectively promote software
sharing, because most developers did not use the libraries. We
concluded that weaker conditions might promote sharing better.
However, unrestricted linking of non-free programs would deprive the
users of those programs of all benefit from the free status of the
libraries themselves. This Library General Public License is intended to
permit developers of non-free programs to use free libraries, while
preserving your freedom as a user of such programs to change the free
libraries that are incorporated in them. (We have not seen how to achieve
this as regards changes in header files, but we have achieved it as regards
changes in the actual functions of the Library.) The hope is that this
will lead to faster development of free libraries.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, while the latter only
works together with the library.
Note that it is possible for a library to be covered by the ordinary
General Public License rather than by this special one.
GNU LIBRARY GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library which
contains a notice placed by the copyright holder or other authorized
party saying it may be distributed under the terms of this Library
General Public License (also called "this License"). Each licensee is
addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also compile or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
c) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
d) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the source code distributed need not include anything that is normally
distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Library General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
```
</details>
# ShellLink (MIT)
<details>
<summary>See License</summary>
```
MIT License
Copyright (c) 2017 Yorick Koster, Securify B.V.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
</details> | {
"source": "Ryubing/Ryujinx",
"title": "distribution/legal/THIRDPARTY.md",
"url": "https://github.com/Ryubing/Ryujinx/blob/master/distribution/legal/THIRDPARTY.md",
"date": "2024-10-06T20:58:11",
"stars": 8135,
"description": "Nintendo Switch emulator written in C#, originally created by gdkchan.",
"file_size": 36222
} |
# C# Coding Style
The general rule we follow is "use Visual Studio defaults".
Using an IDE that supports the `.editorconfig` standard will make this much simpler.
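If you prefer the command line over IDE tooling, the `dotnet format` command that ships with the .NET SDK can apply most of the mechanical rules below based on the `.editorconfig` referenced above; a minimal sketch, assuming it is run from the repository root:
```bash
# Apply .editorconfig-driven formatting across the solution, then review the changes before committing
dotnet format
git diff
```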
1. We use [Allman style](http://en.wikipedia.org/wiki/Indent_style#Allman_style) braces, where each brace begins on a new line. A single line statement block can go without braces but the block must be properly indented on its own line and must not be nested in other statement blocks that use braces (See rule 18 for more details). One exception is that a `using` statement is permitted to be nested within another `using` statement by starting on the following line at the same indentation level, even if the nested `using` contains a controlled block.
2. We use four spaces of indentation (no tabs).
3. We use `_camelCase` for internal and private fields and use `readonly` where possible. Prefix internal and private instance fields with `_`, static fields with `s_` and thread static fields with `t_`. When used on static fields, `readonly` should come after `static` (e.g. `static readonly` not `readonly static`). Public fields should be used sparingly and should use PascalCasing with no prefix when used.
4. We avoid `this.` unless absolutely necessary.
5. We always specify the visibility, even if it's the default (e.g.
`private string _foo` not `string _foo`). Visibility should be the first modifier (e.g.
`public abstract` not `abstract public`).
6. Namespace imports should be specified at the top of the file, *outside* of `namespace` declarations.
7. Avoid more than one empty line at any time. For example, do not have two
blank lines between members of a type.
8. Avoid spurious free spaces.
For example avoid `if (someVar == 0)...`, where the dots mark the spurious free spaces.
Consider enabling "View White Space (Ctrl+R, Ctrl+W)" or "Edit -> Advanced -> View White Space" if using Visual Studio to aid detection.
9. If a file happens to differ in style from these guidelines (e.g. private members are named `m_member`
rather than `_member`), the existing style in that file takes precedence.
10. We only use `var` when the type is explicitly named on the right-hand side, typically due to either `new` or an explicit cast, e.g. `var stream = new FileStream(...)` not `var stream = OpenStandardInput()`.
- Similarly, target-typed `new()` can only be used when the type is explicitly named on the left-hand side, in a variable definition statement or a field definition statement. e.g. `FileStream stream = new(...);`, but not `stream = new(...);` (where the type was specified on a previous line).
11. We use language keywords instead of BCL types (e.g. `int, string, float` instead of `Int32, String, Single`, etc) for both type references as well as method calls (e.g. `int.Parse` instead of `Int32.Parse`). See issue [#13976](https://github.com/dotnet/runtime/issues/13976) for examples.
12. We use PascalCasing to name all our constant local variables and fields. The only exception is for interop code where the constant value should exactly match the name and value of the code you are calling via interop.
13. We use PascalCasing for all method names, including local functions.
14. We use ```nameof(...)``` instead of ```"..."``` whenever possible and relevant.
15. Fields should be specified at the top within type declarations.
16. When including non-ASCII characters in the source code use Unicode escape sequences (\uXXXX) instead of literal characters. Literal non-ASCII characters occasionally get garbled by a tool or editor.
17. When using labels (for goto), indent the label one less than the current indentation.
18. When using a single-statement if, we follow these conventions (see the short example after this list):
- Never use single-line form (for example: `if (source == null) throw new ArgumentNullException("source");`)
- Using braces is always accepted, and required if any block of an `if`/`else if`/.../`else` compound statement uses braces or if a single statement body spans multiple lines.
- Braces may be omitted only if the body of *every* block associated with an `if`/`else if`/.../`else` compound statement is placed on a single line.
19. Make all internal and private types static or sealed unless derivation from them is required. As with any implementation detail, they can be changed if/when derivation is required in the future.
20. XML docs should be used when writing interfaces or when a class/method is deemed sufficient in scope or complexity.
21. So-called [Magic Numbers](https://en.wikipedia.org/wiki/Magic_number_(programming)) should be defined as named constants before use (for example `for (int i = 56; i < 68; i++)` could read `for (int i = _currentAge; i < _retireAge; i++)`).
This may be ignored for trivial or syntactically common statements.
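As a short illustration of rule 18 (this snippet is not taken from the codebase; the identifiers are only examples), the following forms are accepted and rejected:
```C#
// Accepted: the body of the single-statement if is on its own indented line, so braces may be omitted.
if (source == null)
    throw new ArgumentNullException(nameof(source));

// Accepted: braces are always allowed, and are required here because the body spans multiple lines.
if (result == ProgramLinkStatus.Success)
{
    _cacheWriter.AddShader(programToSave.CachedProgram,
        programToSave.BinaryCode ?? programToSave.HostProgram.GetBinary());
}

// Never accepted: single-line form.
// if (source == null) throw new ArgumentNullException(nameof(source));
```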
An [EditorConfig](https://editorconfig.org "EditorConfig homepage") file (`.editorconfig`) has been provided at the root of the repository, enabling C# auto-formatting conforming to the above guidelines.
### Example File:
``ShaderCache.cs:``
```C#
using Ryujinx.Common.Configuration;
using Ryujinx.Common.Logging;
using Ryujinx.Graphics.GAL;
using Ryujinx.Graphics.Gpu.Engine.Threed;
using Ryujinx.Graphics.Gpu.Engine.Types;
using Ryujinx.Graphics.Gpu.Image;
using Ryujinx.Graphics.Gpu.Memory;
using Ryujinx.Graphics.Gpu.Shader.DiskCache;
using Ryujinx.Graphics.Shader;
using Ryujinx.Graphics.Shader.Translation;
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading;

namespace Ryujinx.Graphics.Gpu.Shader
{
    /// <summary>
    /// Memory cache of shader code.
    /// </summary>
    class ShaderCache : IDisposable
    {
        /// <summary>
        /// Default flags used on the shader translation process.
        /// </summary>
        public const TranslationFlags DefaultFlags = TranslationFlags.DebugMode;

        private readonly struct TranslatedShader
        {
            public readonly CachedShaderStage Shader;
            public readonly ShaderProgram Program;

            public TranslatedShader(CachedShaderStage shader, ShaderProgram program)
            {
                Shader = shader;
                Program = program;
            }
        }

        ...

        /// <summary>
        /// Processes the queue of shaders that must save their binaries to the disk cache.
        /// </summary>
        public void ProcessShaderCacheQueue()
        {
            // Check to see if the binaries for previously compiled shaders are ready, and save them out.
            while (_programsToSaveQueue.TryPeek(out ProgramToSave programToSave))
            {
                ProgramLinkStatus result = programToSave.HostProgram.CheckProgramLink(false);

                if (result != ProgramLinkStatus.Incomplete)
                {
                    if (result == ProgramLinkStatus.Success)
                    {
                        _cacheWriter.AddShader(programToSave.CachedProgram, programToSave.BinaryCode ?? programToSave.HostProgram.GetBinary());
                    }

                    _programsToSaveQueue.Dequeue();
                }
                else
                {
                    break;
                }
            }
        }
    }
}
```
For other languages, our current best guidance is consistency. When editing files, keep new code and changes consistent with the style in the files. For new files, it should conform to the style for that component. If there is a completely new component, anything that is reasonably broadly accepted is fine. | {
"source": "Ryubing/Ryujinx",
"title": "docs/coding-guidelines/coding-style.md",
"url": "https://github.com/Ryubing/Ryujinx/blob/master/docs/coding-guidelines/coding-style.md",
"date": "2024-10-06T20:58:11",
"stars": 8135,
"description": "Nintendo Switch emulator written in C#, originally created by gdkchan.",
"file_size": 7543
} |
# Pull Request Guide
## Contributing Rules
All contributions to the GreemDev/Ryujinx repository are made via pull requests (PRs) rather than through direct commits. The pull requests are reviewed and merged by the maintainers after a review and at least two approvals from the core development team.
To merge pull requests, you must have write permissions in the repository.
## Quick Code Review Rules
* Do not mix unrelated changes in one pull request. For example, a code style change should never be mixed with a bug fix.
* All changes should follow the existing code style. You can read more about our code style at [docs/coding-style](../coding-guidelines/coding-style.md).
* Adding external dependencies is to be avoided unless not doing so would introduce _significant_ complexity. Any dependency addition should be justified and discussed before merge.
* Use Draft pull requests for changes you are still working on but want early CI loop feedback. When you think your changes are ready for review, [change the status](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/changing-the-stage-of-a-pull-request) of your pull request.
* Rebase your changes when required or directly requested. Changes should always be committed on top of the upstream branch, not the other way around.
* If you are asked to make changes during the review process, do them as a new commit.
* Only resolve GitHub conversations with reviewers once they have been addressed with a commit, or via a mutual agreement.
## Pull Request Ownership
Every pull request will automatically have labels and reviewers assigned. The label not only indicates the code segment which the change touches, but also which area reviewers will be assigned.
If during the code review process a merge conflict occurs, the PR author is responsible for its resolution. Help will be provided if necessary although GitHub makes this easier by allowing simple conflict resolution using the [conflict-editor](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/resolving-a-merge-conflict-on-github).
## Pull Request Builds
When submitting a PR to the `Ryubing/Ryujinx` repository, various builds will run validating many areas to ensure we keep developer productivity and product quality high. These various workflows can be tracked in the [Actions](https://github.com/Ryubing/Ryujinx/actions) tab of the repository. If the job continues to completion, the build artifacts will be uploaded and posted as a comment in the PR discussion.
## Review Turnaround Times
Ryujinx is a project that is maintained by volunteers on a completely free-time basis. As such we cannot guarantee any particular timeframe for pull request review and approval. Weeks to months are common for larger (>500 line) PRs, but there are some additional best practices to avoid review purgatory.
* Make the reviewers' lives easier wherever possible. Make use of descriptive commit names, code comments and XML docs where applicable.
* If there is disagreement on feedback then always lean on the side of the development team and community over any personal opinion.
* We're human. We miss things. We forget things. If there has been radio silence on your changes for a substantial period of time then do not hesitate to reach out directly, either with something simple like "bump" on GitHub or directly on Discord.
To reiterate: make the review as easy for us as possible, respond promptly, and feel free to interact directly with us for anything else.
## Merging Pull Requests
Anyone with write access can merge a pull request manually when the following conditions have been met:
* The PR has been approved by two reviewers and any other objections are addressed.
* You can request follow up reviews from the original reviewers if they requested changes.
* The PR successfully builds and passes all tests in the Continuous Integration (CI) system. In case of failures, refer to the [Actions](https://github.com/Ryubing/Ryujinx/actions) tab of your PR.
Typically, PRs are merged as one commit (squash merges). It creates a simpler history than a Merge Commit. "Special circumstances" are rare, and typically mean that there are a series of cleanly separated changes that will be too hard to understand if squashed together, or for some reason we want to preserve the ability to dissect them.
## Blocking Pull Request Merging
If for whatever reason you would like to move your pull request back to an in-progress status to avoid merging it in the current form, you can turn the PR into a draft PR by selecting the option under the reviewers section. Alternatively, you can do that by adding a [WIP] prefix to the pull request title.
## Old Pull Request Policy
From time to time we will review older PRs and check them for relevance. If we find the PR is inactive or no longer applies, we will close it. As the PR owner, you can simply reopen it if you feel your closed PR needs our attention. | {
"source": "Ryubing/Ryujinx",
"title": "docs/workflow/pr-guide.md",
"url": "https://github.com/Ryubing/Ryujinx/blob/master/docs/workflow/pr-guide.md",
"date": "2024-10-06T20:58:11",
"stars": 8135,
"description": "Nintendo Switch emulator written in C#, originally created by gdkchan.",
"file_size": 4987
} |
<img src="./assets/web-ui.png" alt="Browser Use Web UI" width="full"/>
<br/>
[](https://github.com/browser-use/web-ui/stargazers)
[](https://link.browser-use.com/discord)
[](https://docs.browser-use.com)
[](https://x.com/warmshao)
This project builds upon the foundation of the [browser-use](https://github.com/browser-use/browser-use), which is designed to make websites accessible for AI agents.
We would like to officially thank [WarmShao](https://github.com/warmshao) for his contribution to this project.
**WebUI:** Built on Gradio, the WebUI supports most `browser-use` functionalities. It is designed to be user-friendly and enables easy interaction with the browser agent.
**Expanded LLM Support:** We've integrated support for various Large Language Models (LLMs), including Google, OpenAI, Azure OpenAI, Anthropic, DeepSeek, Ollama, and more. We plan to add support for even more models in the future.
**Custom Browser Support:** You can use your own browser with our tool, eliminating the need to re-login to sites or deal with other authentication challenges. This feature also supports high-definition screen recording.
**Persistent Browser Sessions:** You can choose to keep the browser window open between AI tasks, allowing you to see the complete history and state of AI interactions.
<video src="https://github.com/user-attachments/assets/56bc7080-f2e3-4367-af22-6bf2245ff6cb" controls="controls">Your browser does not support playing this video!</video>
## Installation Guide
### Prerequisites
- Python 3.11 or higher
- Git (for cloning the repository)
### Option 1: Local Installation
Read the [quickstart guide](https://docs.browser-use.com/quickstart#prepare-the-environment) or follow the steps below to get started.
#### Step 1: Clone the Repository
```bash
git clone https://github.com/browser-use/web-ui.git
cd web-ui
```
#### Step 2: Set Up Python Environment
We recommend using [uv](https://docs.astral.sh/uv/) for managing the Python environment.
Using uv (recommended):
```bash
uv venv --python 3.11
```
Activate the virtual environment:
- Windows (Command Prompt):
```cmd
.venv\Scripts\activate
```
- Windows (PowerShell):
```powershell
.\.venv\Scripts\Activate.ps1
```
- macOS/Linux:
```bash
source .venv/bin/activate
```
#### Step 3: Install Dependencies
Install Python packages:
```bash
uv pip install -r requirements.txt
```
Install Playwright:
```bash
playwright install
```
#### Step 4: Configure Environment
1. Create a copy of the example environment file:
- Windows (Command Prompt):
```bash
copy .env.example .env
```
- macOS/Linux/Windows (PowerShell):
```bash
cp .env.example .env
```
2. Open `.env` in your preferred text editor and add your API keys and other settings
### Option 2: Docker Installation
#### Prerequisites
- Docker and Docker Compose installed
- [Docker Desktop](https://www.docker.com/products/docker-desktop/) (For Windows/macOS)
- [Docker Engine](https://docs.docker.com/engine/install/) and [Docker Compose](https://docs.docker.com/compose/install/) (For Linux)
#### Installation Steps
1. Clone the repository:
```bash
git clone https://github.com/browser-use/web-ui.git
cd web-ui
```
2. Create and configure environment file:
- Windows (Command Prompt):
```bash
copy .env.example .env
```
- macOS/Linux/Windows (PowerShell):
```bash
cp .env.example .env
```
Edit `.env` with your preferred text editor and add your API keys
3. Run with Docker:
```bash
# Build and start the container with default settings (browser closes after AI tasks)
docker compose up --build
```
```bash
# Or run with persistent browser (browser stays open between AI tasks)
CHROME_PERSISTENT_SESSION=true docker compose up --build
```
4. Access the Application:
- Web Interface: Open `http://localhost:7788` in your browser
- VNC Viewer (for watching browser interactions): Open `http://localhost:6080/vnc.html`
- Default VNC password: "vncpassword"
- Can be changed by setting `VNC_PASSWORD` in your `.env` file
## Usage
### Local Setup
1. **Run the WebUI:**
After completing the installation steps above, start the application:
```bash
python webui.py --ip 127.0.0.1 --port 7788
```
2. WebUI options:
- `--ip`: The IP address to bind the WebUI to. Default is `127.0.0.1`.
- `--port`: The port to bind the WebUI to. Default is `7788`.
- `--theme`: The theme for the user interface. Default is `Ocean`.
- **Default**: The standard theme with a balanced design.
- **Soft**: A gentle, muted color scheme for a relaxed viewing experience.
- **Monochrome**: A grayscale theme with minimal color for simplicity and focus.
- **Glass**: A sleek, semi-transparent design for a modern appearance.
- **Origin**: A classic, retro-inspired theme for a nostalgic feel.
- **Citrus**: A vibrant, citrus-inspired palette with bright and fresh colors.
- **Ocean** (default): A blue, ocean-inspired theme providing a calming effect.
- `--dark-mode`: Enables dark mode for the user interface.
3. **Access the WebUI:** Open your web browser and navigate to `http://127.0.0.1:7788`.
4. **Using Your Own Browser (Optional):**
- Set `CHROME_PATH` to the executable path of your browser and `CHROME_USER_DATA` to the user data directory of your browser. Leave `CHROME_USER_DATA` empty if you want to use local user data.
- Windows
```env
CHROME_PATH="C:\Program Files\Google\Chrome\Application\chrome.exe"
CHROME_USER_DATA="C:\Users\YourUsername\AppData\Local\Google\Chrome\User Data"
```
> Note: Replace `YourUsername` with your actual Windows username for Windows systems.
- Mac
```env
CHROME_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
CHROME_USER_DATA="/Users/YourUsername/Library/Application Support/Google/Chrome"
```
- Close all Chrome windows
- Open the WebUI in a non-Chrome browser, such as Firefox or Edge. This is important because the persistent browser context will use the Chrome data when running the agent.
- Check the "Use Own Browser" option within the Browser Settings.
5. **Keep Browser Open (Optional):**
- Set `CHROME_PERSISTENT_SESSION=true` in the `.env` file.
### Docker Setup
1. **Environment Variables:**
- All configuration is done through the `.env` file
- Available environment variables:
```
# LLM API Keys
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here
# Browser Settings
CHROME_PERSISTENT_SESSION=true # Set to true to keep browser open between AI tasks
RESOLUTION=1920x1080x24 # Custom resolution format: WIDTHxHEIGHTxDEPTH
RESOLUTION_WIDTH=1920 # Custom width in pixels
RESOLUTION_HEIGHT=1080 # Custom height in pixels
# VNC Settings
VNC_PASSWORD=your_vnc_password # Optional, defaults to "vncpassword"
```
2. **Platform Support:**
- Supports both AMD64 and ARM64 architectures
- For ARM64 systems (e.g., Apple Silicon Macs), the container will automatically use the appropriate image
3. **Browser Persistence Modes:**
- **Default Mode (CHROME_PERSISTENT_SESSION=false):**
- Browser opens and closes with each AI task
- Clean state for each interaction
- Lower resource usage
- **Persistent Mode (CHROME_PERSISTENT_SESSION=true):**
- Browser stays open between AI tasks
- Maintains history and state
- Allows viewing previous AI interactions
- Set in `.env` file or via environment variable when starting container
4. **Viewing Browser Interactions:**
- Access the noVNC viewer at `http://localhost:6080/vnc.html`
- Enter the VNC password (default: "vncpassword" or what you set in VNC_PASSWORD)
- Direct VNC access available on port 5900 (mapped to container port 5901)
- You can now see all browser interactions in real-time
5. **Container Management:**
```bash
# Start with persistent browser
CHROME_PERSISTENT_SESSION=true docker compose up -d
# Start with default mode (browser closes after tasks)
docker compose up -d
# View logs
docker compose logs -f
# Stop the container
docker compose down
```
## Changelog
- [x] **2025/01/26:** Thanks to @vvincent1234. Now browser-use-webui can be combined with DeepSeek-r1 to engage in deep thinking!
- [x] **2025/01/10:** Thanks to @casistack. Now we have a Docker setup option and also support keeping the browser open between tasks. [Video tutorial demo](https://github.com/browser-use/web-ui/issues/1#issuecomment-2582511750).
- [x] **2025/01/06:** Thanks to @richard-devbot. A New and Well-Designed WebUI is released. [Video tutorial demo](https://github.com/warmshao/browser-use-webui/issues/1#issuecomment-2573393113). | {
"source": "browser-use/web-ui",
"title": "README.md",
"url": "https://github.com/browser-use/web-ui/blob/main/README.md",
"date": "2025-01-02T01:29:44",
"stars": 8094,
"description": "Run AI Agent in your browser.",
"file_size": 9133
} |
## Reporting Security Issues
If you believe you have found a security vulnerability in browser-use, please report it through coordinated disclosure.
**Please do not report security vulnerabilities through the repository issues, discussions, or pull requests.**
Instead, please open a new [Github security advisory](https://github.com/browser-use/web-ui/security/advisories/new).
Please include as much of the information listed below as you can to help me better understand and resolve the issue:
* The type of issue (e.g., buffer overflow, SQL injection, or cross-site scripting)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help me triage your report more quickly. | {
"source": "browser-use/web-ui",
"title": "SECURITY.md",
"url": "https://github.com/browser-use/web-ui/blob/main/SECURITY.md",
"date": "2025-01-02T01:29:44",
"stars": 8094,
"description": "Run AI Agent in your browser.",
"file_size": 1032
} |
# Microsoft Open Source Code of Conduct
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
Resources:
- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [[email protected]](mailto:[email protected]) with questions or concerns | {
"source": "microsoft/TRELLIS",
"title": "CODE_OF_CONDUCT.md",
"url": "https://github.com/microsoft/TRELLIS/blob/main/CODE_OF_CONDUCT.md",
"date": "2024-12-02T05:44:19",
"stars": 7917,
"description": "Official repo for paper \"Structured 3D Latents for Scalable and Versatile 3D Generation\".",
"file_size": 443
} |
# TRELLIS-500K
TRELLIS-500K is a dataset of 500K 3D assets curated from [Objaverse(XL)](https://objaverse.allenai.org/), [ABO](https://amazon-berkeley-objects.s3.amazonaws.com/index.html), [3D-FUTURE](https://tianchi.aliyun.com/specials/promotion/alibaba-3d-future), [HSSD](https://huggingface.co/datasets/hssd/hssd-models), and [Toys4k](https://github.com/rehg-lab/lowshot-shapebias/tree/main/toys4k), filtered based on aesthetic scores.
This dataset is intended for 3D generation tasks.
The dataset is provided as CSV files containing the 3D assets' metadata.
## Dataset Statistics
The following table summarizes the dataset's filtering and composition:
***NOTE: Some of the 3D assets lack text captions. Please filter out such assets if captions are required.***
| Source | Aesthetic Score Threshold | Filtered Size | With Captions |
|:-:|:-:|:-:|:-:|
| ObjaverseXL (sketchfab) | 5.5 | 168307 | 167638 |
| ObjaverseXL (github) | 5.5 | 311843 | 306790 |
| ABO | 4.5 | 4485 | 4390 |
| 3D-FUTURE | 4.5 | 9472 | 9291 |
| HSSD | 4.5 | 6670 | 6661 |
| All (training set) | - | 500777 | 494770 |
| Toys4k (evaluation set) | 4.5 | 3229 | 3180 |
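If captions are required for your use case, the filtering mentioned in the note above can be done on the prepared metadata. Below is a minimal sketch with pandas; the CSV path and the caption column name are assumptions for illustration and may differ from what the toolkits actually produce:
```python
import pandas as pd

# Load the prepared metadata (path and "captions" column name are assumptions for illustration).
metadata = pd.read_csv("datasets/ObjaverseXL_sketchfab/metadata.csv")

# Keep only assets that actually have a text caption.
captioned = metadata[metadata["captions"].notna() & (metadata["captions"] != "")]

print(f"{len(captioned)} of {len(metadata)} assets have captions")
```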
## Dataset Location
The dataset is hosted on Hugging Face Datasets. You can preview the dataset at
[https://huggingface.co/datasets/JeffreyXiang/TRELLIS-500K](https://huggingface.co/datasets/JeffreyXiang/TRELLIS-500K)
There is no need to download the csv files manually. We provide toolkits to load and prepare the dataset.
## Dataset Toolkits
We provide [toolkits](dataset_toolkits) for data preparation.
### Step 1: Install Dependencies
```
. ./dataset_toolkits/setup.sh
```
### Step 2: Load Metadata
First, we need to load the metadata of the dataset.
```
python dataset_toolkits/build_metadata.py <SUBSET> --output_dir <OUTPUT_DIR> [--source <SOURCE>]
```
- `SUBSET`: The subset of the dataset to load. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
- `OUTPUT_DIR`: The directory to save the data.
- `SOURCE`: Required if `SUBSET` is `ObjaverseXL`. Options are `sketchfab` and `github`.
For example, to load the metadata of the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --source sketchfab --output_dir datasets/ObjaverseXL_sketchfab
```
### Step 3: Download Data
Next, we need to download the 3D assets.
```
python dataset_toolkits/download.py <SUBSET> --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
```
- `SUBSET`: The subset of the dataset to download. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
- `OUTPUT_DIR`: The directory to save the data.
You can also specify the `RANK` and `WORLD_SIZE` of the current process if you are using multiple nodes for data preparation.
For example, to download the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
***NOTE: The example command below sets a large `WORLD_SIZE` for demonstration purposes. Only a small portion of the dataset will be downloaded.***
```
python dataset_toolkits/download.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab --world_size 160000
```
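If you do shard the work across multiple nodes, each process passes its own rank. For instance, the first of eight workers (example values, using the `--rank`/`--world_size` options documented above) would run:
```
python dataset_toolkits/download.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab --rank 0 --world_size 8
```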
Some datasets may require interactive login to Hugging Face or manual downloading. Please follow the instructions given by the toolkits.
After downloading, update the metadata file with:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
### Step 4: Render Multiview Images
Multiview images can be rendered with:
```
python dataset_toolkits/render.py <SUBSET> --output_dir <OUTPUT_DIR> [--num_views <NUM_VIEWS>] [--rank <RANK> --world_size <WORLD_SIZE>]
```
- `SUBSET`: The subset of the dataset to render. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
- `OUTPUT_DIR`: The directory to save the data.
- `NUM_VIEWS`: The number of views to render. Default is 150.
- `RANK` and `WORLD_SIZE`: Multi-node configuration.
For example, to render the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
```
python dataset_toolkits/render.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
Don't forget to update the metadata file with:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
### Step 5: Voxelize 3D Models
We can voxelize the 3D models with:
```
python dataset_toolkits/voxelize.py <SUBSET> --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
```
- `SUBSET`: The subset of the dataset to voxelize. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
- `OUTPUT_DIR`: The directory to save the data.
- `RANK` and `WORLD_SIZE`: Multi-node configuration.
For example, to voxelize the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
```
python dataset_toolkits/voxelize.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
Then update the metadata file with:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
### Step 6: Extract DINO Features
To prepare the training data for SLat VAE, we need to extract DINO features from multiview images and aggregate them into sparse voxel grids.
```
python dataset_toolkits/extract_feature.py --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
```
- `OUTPUT_DIR`: The directory to save the data.
- `RANK` and `WORLD_SIZE`: Multi-node configuration.
For example, to extract DINO features from the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
```
python dataset_toolkits/extract_feature.py --output_dir datasets/ObjaverseXL_sketchfab
```
Then update the metadata file with:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
### Step 7: Encode Sparse Structures
Encoding the sparse structures into latents to train the first stage generator:
```
python dataset_toolkits/encode_ss_latent.py --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
```
- `OUTPUT_DIR`: The directory to save the data.
- `RANK` and `WORLD_SIZE`: Multi-node configuration.
For example, to encode the sparse structures into latents for the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
```
python dataset_toolkits/encode_ss_latent.py --output_dir datasets/ObjaverseXL_sketchfab
```
Then update the metadata file with:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
### Step 8: Encode SLat
Encoding SLat for second stage generator training:
```
python dataset_toolkits/encode_latent.py --output_dir <OUTPUT_DIR> [--rank <RANK> --world_size <WORLD_SIZE>]
```
- `OUTPUT_DIR`: The directory to save the data.
- `RANK` and `WORLD_SIZE`: Multi-node configuration.
For example, to encode SLat for the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
```
python dataset_toolkits/encode_latent.py --output_dir datasets/ObjaverseXL_sketchfab
```
Then update the metadata file with:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
### Step 9: Render Image Conditions
To train the image conditioned generator, we need to render image conditions with augmented views.
```
python dataset_toolkits/render_cond.py <SUBSET> --output_dir <OUTPUT_DIR> [--num_views <NUM_VIEWS>] [--rank <RANK> --world_size <WORLD_SIZE>]
```
- `SUBSET`: The subset of the dataset to render. Options are `ObjaverseXL`, `ABO`, `3D-FUTURE`, `HSSD`, and `Toys4k`.
- `OUTPUT_DIR`: The directory to save the data.
- `NUM_VIEWS`: The number of views to render. Default is 24.
- `RANK` and `WORLD_SIZE`: Multi-node configuration.
For example, to render image conditions for the ObjaverseXL (sketchfab) subset and save it to `datasets/ObjaverseXL_sketchfab`, we can run:
```
python dataset_toolkits/render_cond.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
```
Then update the metadata file with:
```
python dataset_toolkits/build_metadata.py ObjaverseXL --output_dir datasets/ObjaverseXL_sketchfab
``` | {
"source": "microsoft/TRELLIS",
"title": "DATASET.md",
"url": "https://github.com/microsoft/TRELLIS/blob/main/DATASET.md",
"date": "2024-12-02T05:44:19",
"stars": 7917,
"description": "Official repo for paper \"Structured 3D Latents for Scalable and Versatile 3D Generation\".",
"file_size": 8304
} |
<img src="assets/logo.webp" width="100%" align="center">
<h1 align="center">Structured 3D Latents<br>for Scalable and Versatile 3D Generation</h1>
<p align="center"><a href="https://arxiv.org/abs/2412.01506"><img src='https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white' alt='arXiv'></a>
<a href='https://trellis3d.github.io'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
<a href='https://huggingface.co/spaces/JeffreyXiang/TRELLIS'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Live_Demo-blue'></a>
</p>
<p align="center"><img src="assets/teaser.png" width="100%"></p>
<span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> is a large 3D asset generation model. It takes in text or image prompts and generates high-quality 3D assets in various formats, such as Radiance Fields, 3D Gaussians, and meshes. The cornerstone of <span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> is a unified Structured LATent (<span style="font-size: 16px; font-weight: 600;">SL</span><span style="font-size: 12px; font-weight: 700;">AT</span>) representation that allows decoding to different output formats and Rectified Flow Transformers tailored for <span style="font-size: 16px; font-weight: 600;">SL</span><span style="font-size: 12px; font-weight: 700;">AT</span> as the powerful backbones. We provide large-scale pre-trained models with up to 2 billion parameters on a large 3D asset dataset of 500K diverse objects. <span style="font-size: 16px; font-weight: 600;">T</span><span style="font-size: 12px; font-weight: 700;">RELLIS</span> significantly surpasses existing methods, including recent ones at similar scales, and showcases flexible output format selection and local 3D editing capabilities which were not offered by previous models.
***Check out our [Project Page](https://trellis3d.github.io) for more videos and interactive demos!***
<!-- Features -->
## 🌟 Features
- **High Quality**: It produces diverse 3D assets at high quality with intricate shape and texture details.
- **Versatility**: It takes text or image prompts and can generate various final 3D representations including but not limited to *Radiance Fields*, *3D Gaussians*, and *meshes*, accommodating diverse downstream requirements.
- **Flexible Editing**: It allows for easy editing of generated 3D assets, such as generating variants of the same object or local editing of the 3D asset.
<!-- Updates -->
## ⏩ Updates
**12/26/2024**
- Release [**TRELLIS-500K**](https://github.com/microsoft/TRELLIS#-dataset) dataset and toolkits for data preparation.
**12/18/2024**
- Implementation of multi-image conditioning for TRELLIS-image model. ([#7](https://github.com/microsoft/TRELLIS/issues/7)). This is based on a tuning-free algorithm without training a specialized model, so it may not give the best results for all input images.
- Add Gaussian export in `app.py` and `example.py`. ([#40](https://github.com/microsoft/TRELLIS/issues/40))
<!-- TODO List -->
## 🚧 TODO List
- [x] Release inference code and TRELLIS-image-large model
- [x] Release dataset and dataset toolkits
- [ ] Release TRELLIS-text model series
- [ ] Release training code
<!-- Installation -->
## 📦 Installation
### Prerequisites
- **System**: The code is currently tested only on **Linux**. For Windows setup, you may refer to [#3](https://github.com/microsoft/TRELLIS/issues/3) (not fully tested).
- **Hardware**: An NVIDIA GPU with at least 16GB of memory is necessary. The code has been verified on NVIDIA A100 and A6000 GPUs.
- **Software**:
- The [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit-archive) is needed to compile certain submodules. The code has been tested with CUDA versions 11.8 and 12.2.
- [Conda](https://docs.anaconda.com/miniconda/install/#quick-command-line-install) is recommended for managing dependencies.
- Python version 3.8 or higher is required.
### Installation Steps
1. Clone the repo:
```sh
git clone --recurse-submodules https://github.com/microsoft/TRELLIS.git
cd TRELLIS
```
2. Install the dependencies:
**Before running the following command, there are some things to note:**
- By adding `--new-env`, a new conda environment named `trellis` will be created. If you want to use an existing conda environment, please remove this flag.
- By default the `trellis` environment will use pytorch 2.4.0 with CUDA 11.8. If you want to use a different version of CUDA (e.g., if you have CUDA Toolkit 12.2 installed and do not want to install another 11.8 version for submodule compilation), you can remove the `--new-env` flag and manually install the required dependencies. Refer to [PyTorch](https://pytorch.org/get-started/previous-versions/) for the installation command.
- If you have multiple CUDA Toolkit versions installed, `PATH` should be set to the correct version before running the command. For example, if you have CUDA Toolkit 11.8 and 12.2 installed, you should run `export PATH=/usr/local/cuda-11.8/bin:$PATH` before running the command.
- By default, the code uses the `flash-attn` backend for attention. For GPUs that do not support `flash-attn` (e.g., NVIDIA V100), you can remove the `--flash-attn` flag to install `xformers` only and set the `ATTN_BACKEND` environment variable to `xformers` before running the code. See the [Minimal Example](#minimal-example) for more details.
- The installation may take a while due to the large number of dependencies. Please be patient. If you encounter any issues, you can try to install the dependencies one by one, specifying one flag at a time.
- If you encounter any issues during the installation, feel free to open an issue or contact us.
Create a new conda environment named `trellis` and install the dependencies:
```sh
. ./setup.sh --new-env --basic --xformers --flash-attn --diffoctreerast --spconv --mipgaussian --kaolin --nvdiffrast
```
The detailed usage of `setup.sh` can be found by running `. ./setup.sh --help`.
```sh
Usage: setup.sh [OPTIONS]
Options:
-h, --help Display this help message
--new-env Create a new conda environment
--basic Install basic dependencies
--xformers Install xformers
--flash-attn Install flash-attn
--diffoctreerast Install diffoctreerast
--vox2seq Install vox2seq
--spconv Install spconv
--mipgaussian Install mip-splatting
--kaolin Install kaolin
--nvdiffrast Install nvdiffrast
--demo Install all dependencies for demo
```
<!-- Pretrained Models -->
## 🤖 Pretrained Models
We provide the following pretrained models:
| Model | Description | #Params | Download |
| --- | --- | --- | --- |
| TRELLIS-image-large | Large image-to-3D model | 1.2B | [Download](https://huggingface.co/JeffreyXiang/TRELLIS-image-large) |
| TRELLIS-text-base | Base text-to-3D model | 342M | Coming Soon |
| TRELLIS-text-large | Large text-to-3D model | 1.1B | Coming Soon |
| TRELLIS-text-xlarge | Extra-large text-to-3D model | 2.0B | Coming Soon |
The models are hosted on Hugging Face. You can directly load the models with their repository names in the code:
```python
TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
```
If you prefer loading the model from local, you can download the model files from the links above and load the model with the folder path (folder structure should be maintained):
```python
TrellisImageTo3DPipeline.from_pretrained("/path/to/TRELLIS-image-large")
```
<!-- Usage -->
## 💡 Usage
### Minimal Example
Here is an [example](example.py) of how to use the pretrained models for 3D asset generation.
```python
import os
# os.environ['ATTN_BACKEND'] = 'xformers' # Can be 'flash-attn' or 'xformers', default is 'flash-attn'
os.environ['SPCONV_ALGO'] = 'native' # Can be 'native' or 'auto', default is 'auto'.
# 'auto' is faster but will do benchmarking at the beginning.
# Recommended to set to 'native' if run only once.
import imageio
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline
from trellis.utils import render_utils, postprocessing_utils
# Load a pipeline from a model folder or a Hugging Face model hub.
pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
pipeline.cuda()
# Load an image
image = Image.open("assets/example_image/T.png")
# Run the pipeline
outputs = pipeline.run(
image,
seed=1,
# Optional parameters
# sparse_structure_sampler_params={
# "steps": 12,
# "cfg_strength": 7.5,
# },
# slat_sampler_params={
# "steps": 12,
# "cfg_strength": 3,
# },
)
# outputs is a dictionary containing generated 3D assets in different formats:
# - outputs['gaussian']: a list of 3D Gaussians
# - outputs['radiance_field']: a list of radiance fields
# - outputs['mesh']: a list of meshes
# Render the outputs
video = render_utils.render_video(outputs['gaussian'][0])['color']
imageio.mimsave("sample_gs.mp4", video, fps=30)
video = render_utils.render_video(outputs['radiance_field'][0])['color']
imageio.mimsave("sample_rf.mp4", video, fps=30)
video = render_utils.render_video(outputs['mesh'][0])['normal']
imageio.mimsave("sample_mesh.mp4", video, fps=30)
# GLB files can be extracted from the outputs
glb = postprocessing_utils.to_glb(
outputs['gaussian'][0],
outputs['mesh'][0],
# Optional parameters
simplify=0.95, # Ratio of triangles to remove in the simplification process
texture_size=1024, # Size of the texture used for the GLB
)
glb.export("sample.glb")
# Save Gaussians as PLY files
outputs['gaussian'][0].save_ply("sample.ply")
```
After running the code, you will get the following files:
- `sample_gs.mp4`: a video showing the 3D Gaussian representation
- `sample_rf.mp4`: a video showing the Radiance Field representation
- `sample_mesh.mp4`: a video showing the mesh representation
- `sample.glb`: a GLB file containing the extracted textured mesh
- `sample.ply`: a PLY file containing the 3D Gaussian representation
### Web Demo
[app.py](app.py) provides a simple web demo for 3D asset generation. Since this demo is based on [Gradio](https://gradio.app/), additional dependencies are required:
```sh
. ./setup.sh --demo
```
After installing the dependencies, you can run the demo with the following command:
```sh
python app.py
```
Then, you can access the demo at the address shown in the terminal.
***The web demo is also available on [Hugging Face Spaces](https://huggingface.co/spaces/JeffreyXiang/TRELLIS)!***
<!-- Dataset -->
## 📚 Dataset
We provide **TRELLIS-500K**, a large-scale dataset containing 500K 3D assets curated from [Objaverse(XL)](https://objaverse.allenai.org/), [ABO](https://amazon-berkeley-objects.s3.amazonaws.com/index.html), [3D-FUTURE](https://tianchi.aliyun.com/specials/promotion/alibaba-3d-future), [HSSD](https://huggingface.co/datasets/hssd/hssd-models), and [Toys4k](https://github.com/rehg-lab/lowshot-shapebias/tree/main/toys4k), filtered based on aesthetic scores. Please refer to the [dataset README](DATASET.md) for more details.
<!-- License -->
## ⚖️ License
TRELLIS models and the majority of the code are licensed under the [MIT License](LICENSE). The following submodules may have different licenses:
- [**diffoctreerast**](https://github.com/JeffreyXiang/diffoctreerast): We developed a CUDA-based real-time differentiable octree renderer for rendering radiance fields as part of this project. This renderer is derived from the [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization) project and is available under the [LICENSE](https://github.com/JeffreyXiang/diffoctreerast/blob/master/LICENSE).
- [**Modified Flexicubes**](https://github.com/MaxtirError/FlexiCubes): In this project, we used a modified version of [Flexicubes](https://github.com/nv-tlabs/FlexiCubes) to support vertex attributes. This modified version is licensed under the [LICENSE](https://github.com/nv-tlabs/FlexiCubes/blob/main/LICENSE.txt).
<!-- Citation -->
## 📜 Citation
If you find this work helpful, please consider citing our paper:
```bibtex
@article{xiang2024structured,
title = {Structured 3D Latents for Scalable and Versatile 3D Generation},
author = {Xiang, Jianfeng and Lv, Zelong and Xu, Sicheng and Deng, Yu and Wang, Ruicheng and Zhang, Bowen and Chen, Dong and Tong, Xin and Yang, Jiaolong},
journal = {arXiv preprint arXiv:2412.01506},
year = {2024}
}
``` | {
"source": "microsoft/TRELLIS",
"title": "README.md",
"url": "https://github.com/microsoft/TRELLIS/blob/main/README.md",
"date": "2024-12-02T05:44:19",
"stars": 7917,
"description": "Official repo for paper \"Structured 3D Latents for Scalable and Versatile 3D Generation\".",
"file_size": 13010
} |
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.9 BLOCK -->
## Security
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet) and [Xamarin](https://github.com/xamarin).
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/security.md/definition), please report it to us as described below.
## Reporting Security Issues
**Please do not report security vulnerabilities through public GitHub issues.**
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/security.md/msrc/create-report).
If you prefer to submit without logging in, send email to [[email protected]](mailto:[email protected]). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/security.md/msrc/pgp).
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://www.microsoft.com/msrc).
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
* Full paths of source file(s) related to the manifestation of the issue
* The location of the affected source code (tag/branch/commit or direct URL)
* Any special configuration required to reproduce the issue
* Step-by-step instructions to reproduce the issue
* Proof-of-concept or exploit code (if possible)
* Impact of the issue, including how an attacker might exploit the issue
This information will help us triage your report more quickly.
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/security.md/msrc/bounty) page for more details about our active programs.
## Preferred Languages
We prefer all communications to be in English.
## Policy
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/security.md/cvd).
<!-- END MICROSOFT SECURITY.MD BLOCK --> | {
"source": "microsoft/TRELLIS",
"title": "SECURITY.md",
"url": "https://github.com/microsoft/TRELLIS/blob/main/SECURITY.md",
"date": "2024-12-02T05:44:19",
"stars": 7917,
"description": "Official repo for paper \"Structured 3D Latents for Scalable and Versatile 3D Generation\".",
"file_size": 2655
} |
# TODO: The maintainer of this repo has not yet edited this file
**REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project?
- **No CSS support:** Fill out this template with information about how to file issues and get help.
- **Yes CSS support:** Fill out an intake form at [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine next steps.
- **Not sure?** Fill out an intake as though the answer were "Yes". CSS will help you decide.
*Then remove this first heading from this SUPPORT.MD file before publishing your repo.*
# Support
## How to file issues and get help
This project uses GitHub Issues to track bugs and feature requests. Please search the existing
issues before filing new issues to avoid duplicates. For new issues, file your bug or
feature request as a new Issue.
For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE
FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER
CHANNEL. WHERE WILL YOU HELP PEOPLE?**.
## Microsoft Support Policy
Support for this **PROJECT or PRODUCT** is limited to the resources listed above. | {
"source": "microsoft/TRELLIS",
"title": "SUPPORT.md",
"url": "https://github.com/microsoft/TRELLIS/blob/main/SUPPORT.md",
"date": "2024-12-02T05:44:19",
"stars": 7917,
"description": "Official repo for paper \"Structured 3D Latents for Scalable and Versatile 3D Generation\".",
"file_size": 1242
} |
# Open Source License Attribution
Cosmos uses Open Source components. You can find the details of these open-source projects along with license information below, sorted alphabetically.
We are grateful to the developers for their contributions to open source and acknowledge these below.
## Better-Profanity - [MIT License](https://github.com/snguyenthanh/better_profanity/blob/master/LICENSE)
```
Copyright (c) 2018 The Python Packaging Authority
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
## FFmpeg - [FFMPEG License](https://github.com/FFmpeg/FFmpeg/blob/master/LICENSE.md)
```
# License
Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other
files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
FFmpeg.
Some optional parts of FFmpeg are licensed under the GNU General Public License
version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of
these parts are used by default, you have to explicitly pass `--enable-gpl` to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
Specifically, the GPL parts of FFmpeg are:
- libpostproc
- optional x86 optimization in the files
- `libavcodec/x86/flac_dsp_gpl.asm`
- `libavcodec/x86/idct_mmx.c`
- `libavfilter/x86/vf_removegrain.asm`
- the following building and testing tools
- `compat/solaris/make_sunver.pl`
- `doc/t2h.pm`
- `doc/texi2pod.pl`
- `libswresample/tests/swresample.c`
- `tests/checkasm/*`
- `tests/tiny_ssim.c`
- the following filters in libavfilter:
- `signature_lookup.c`
- `vf_blackframe.c`
- `vf_boxblur.c`
- `vf_colormatrix.c`
- `vf_cover_rect.c`
- `vf_cropdetect.c`
- `vf_delogo.c`
- `vf_eq.c`
- `vf_find_rect.c`
- `vf_fspp.c`
- `vf_histeq.c`
- `vf_hqdn3d.c`
- `vf_kerndeint.c`
- `vf_lensfun.c` (GPL version 3 or later)
- `vf_mcdeint.c`
- `vf_mpdecimate.c`
- `vf_nnedi.c`
- `vf_owdenoise.c`
- `vf_perspective.c`
- `vf_phase.c`
- `vf_pp.c`
- `vf_pp7.c`
- `vf_pullup.c`
- `vf_repeatfields.c`
- `vf_sab.c`
- `vf_signature.c`
- `vf_smartblur.c`
- `vf_spp.c`
- `vf_stereo3d.c`
- `vf_super2xsai.c`
- `vf_tinterlace.c`
- `vf_uspp.c`
- `vf_vaguedenoiser.c`
- `vsrc_mptestsrc.c`
Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
the configure parameter `--enable-version3` will activate this licensing option
for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,
`COPYING.GPLv3` to learn the exact legal terms that apply in this case.
There are a handful of files under other licensing terms, namely:
* The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and
`libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for
licensing details. Specifically note that you must credit the IJG in the
documentation accompanying your program if you only distribute executables.
You must also indicate any changes including additions and deletions to
those three files in the documentation.
* `tests/reference.pnm` is under the expat license.
## External libraries
FFmpeg can be combined with a number of external libraries, which sometimes
affect the licensing of binaries resulting from the combination.
### Compatible libraries
The following libraries are under GPL version 2:
- avisynth
- frei0r
- libcdio
- libdavs2
- librubberband
- libvidstab
- libx264
- libx265
- libxavs
- libxavs2
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing `--enable-gpl` to configure.
The following libraries are under LGPL version 3:
- gmp
- libaribb24
- liblensfun
When combining them with FFmpeg, use the configure option `--enable-version3` to
upgrade FFmpeg to the LGPL v3.
The VMAF, mbedTLS, RK MPI, OpenCORE and VisualOn libraries are under the Apache License
2.0. That license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing `--enable-version3` to configure.
The smbclient library is under the GPL v3, to combine it with FFmpeg,
the options `--enable-gpl` and `--enable-version3` have to be passed to
configure to upgrade FFmpeg to the GPL v3.
### Incompatible libraries
There are certain libraries you can combine with FFmpeg whose licenses are not
compatible with the GPL and/or the LGPL. If you wish to enable these
libraries, even in circumstances that their license may be incompatible, pass
`--enable-nonfree` to configure. This will cause the resulting binary to be
unredistributable.
The Fraunhofer FDK AAC and OpenSSL libraries are under licenses which are
incompatible with the GPLv2 and v3. To the best of our knowledge, they are
compatible with the LGPL.
```
## Hydra-core [MIT License](https://github.com/facebookresearch/hydra/blob/main/LICENSE)
```
MIT License
Copyright (c) Facebook, Inc. and its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
## ImageIo - [BSD 2-Clause "Simplified" License](https://github.com/imageio/imageio/blob/master/LICENSE)
```
Copyright (c) 2014-2022, imageio developers
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
## Iopath - [MIT License](https://github.com/facebookresearch/iopath/blob/main/LICENSE)
```
MIT License
Copyright (c) Facebook, Inc. and its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
## Loguru - [MIT License](https://github.com/Delgan/loguru/blob/master/LICENSE)
```
MIT License
Copyright (c) 2017
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
## Mediapy - [Apache License 2.0](https://github.com/google/mediapy/blob/main/LICENSE)
```
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## Nltk - [Apache License 2.0](https://github.com/nltk/nltk/blob/develop/LICENSE.txt)
```
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## PEFT - [Apache License 2.0](https://github.com/huggingface/peft/blob/main/LICENSE)
```
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## Pillow - [MIT License](https://github.com/python-pillow/Pillow/blob/main/LICENSE)
```
The Python Imaging Library (PIL) is
Copyright © 1997-2011 by Secret Labs AB
Copyright © 1995-2011 by Fredrik Lundh and contributors
Pillow is the friendly PIL fork. It is
Copyright © 2010 by Jeffrey A. Clark and contributors
Like PIL, Pillow is licensed under the open source MIT-CMU License:
By obtaining, using, and/or copying this software and/or its associated
documentation, you agree that you have read, understood, and will comply
with the following terms and conditions:
Permission to use, copy, modify and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appears in all copies, and that
both that copyright notice and this permission notice appear in supporting
documentation, and that the name of Secret Labs AB or the author not be
used in advertising or publicity pertaining to distribution of the software
without specific, written prior permission.
SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS.
IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR ANY SPECIAL,
INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
```
## PyAV - [BSD 3-Clause "New" or "Revised" License](https://github.com/PyAV-Org/PyAV/blob/main/LICENSE.txt)
```
Copyright retained by original committers. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the project nor the names of its contributors may be
used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```
## Pytorch_Retinaface - [MIT License](https://github.com/biubug6/Pytorch_Retinaface/blob/master/LICENSE.MIT)
```
MIT License
Copyright (c) 2019
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
```
## Sentencepiece - [Apache License 2.0](https://github.com/google/sentencepiece/blob/master/LICENSE)
```
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## Termcolor - [MIT License](https://github.com/termcolor/termcolor/blob/main/COPYING.txt)
```
Copyright (c) 2008-2011 Volvox Development Team
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
```
## Transformers - [Apache License 2.0](https://github.com/huggingface/transformers/blob/main/LICENSE)
```
Copyright 2018- The Hugging Face team. All rights reserved.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
``` | {
"source": "NVIDIA/Cosmos",
"title": "ATTRIBUTIONS.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/ATTRIBUTIONS.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 77232
} |
# How to Contribute
We'd love to receive your patches and contributions. Please keep your PRs as drafts until you would like us to review them.
## Code Reviews
All submissions, including submissions by project members, require review. We use GitHub pull requests for this purpose. Consult
[GitHub Help](https://help.github.com/articles/about-pull-requests/) for more information on using pull requests.
## Pipeline
Run the linter before submitting your pull request, and make sure the CI/CD pipeline is green before removing the draft designation.
```bash
./cosmos1/scripts/format.sh
```
## Signing Your Work
* We require that all contributors sign off on their commits. This certifies that the contribution is your original work, or that you have the right to submit it under the same or a compatible license.
* Any contribution that contains commits that are not signed off will not be accepted.
* To sign off on a commit, use the `--signoff` (or `-s`) option when committing your changes:
```bash
$ git commit -s -m "Add cool feature."
```
This will append the following to your commit message:
```
Signed-off-by: Your Name <[email protected]>
```
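* If the name or email in the `Signed-off-by` line looks wrong, note that Git takes it from your configuration. A minimal fix (a sketch; substitute your own identity) is:
```bash
# Set the identity Git uses for commits and sign-offs
git config --global user.name "Your Name"
git config --global user.email "[email protected]"
# Re-create the last commit with a sign-off if it was forgotten
git commit --amend -s --no-edit
```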
* Full text of the DCO:
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
```
```
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or
(b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or
(c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it.
(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
``` | {
"source": "NVIDIA/Cosmos",
"title": "CONTRIBUTING.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/CONTRIBUTING.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 2669
} |
# Cosmos Installation
We have only tested the installation with Ubuntu 24.04, 22.04, and 20.04.
1. Install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
2. Clone the repository.
```bash
git clone [email protected]:NVIDIA/Cosmos.git
cd Cosmos
```
3. Build a Docker image using `Dockerfile` and run the Docker container.
```bash
docker build -t cosmos .
docker run -d --name cosmos_container --gpus all --ipc=host -it -v $(pwd):/workspace cosmos
docker attach cosmos_container
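# Optional sanity check (not part of the official steps): from another host
# shell, confirm that the container can see your GPUs before running models.
docker exec cosmos_container nvidia-smi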
``` | {
"source": "NVIDIA/Cosmos",
"title": "INSTALL.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/INSTALL.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 560
} |

--------------------------------------------------------------------------------
### [Website](https://www.nvidia.com/en-us/ai/cosmos/) | [HuggingFace](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) | [GPU-free Preview](https://build.nvidia.com/explore/discover) | [Paper](https://arxiv.org/abs/2501.03575) | [Paper Website](https://research.nvidia.com/labs/dir/cosmos1/)
[NVIDIA Cosmos](https://www.nvidia.com/cosmos/) is a developer-first world foundation model platform designed to help Physical AI developers build their Physical AI systems better and faster. Cosmos contains
1. pre-trained models, available via [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6) under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/) that allows commercial use of the models for free
2. training scripts under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0), offered through [NVIDIA Nemo Framework](https://github.com/NVIDIA/NeMo) for post-training the models for various downstream Physical AI applications
Details of the platform are described in the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai). Preview access is available at [build.nvidia.com](https://build.nvidia.com).
## Key Features
- [Pre-trained Diffusion-based world foundation models](cosmos1/models/diffusion/README.md) for Text2World and Video2World generation, where a user can generate visual simulations from text prompts and video prompts.
- [Pre-trained Autoregressive-based world foundation models](cosmos1/models/autoregressive/README.md) for Video2World generation, where a user can generate visual simulations from video prompts and optional text prompts.
- [Video tokenizers](cosmos1/models/tokenizer) for tokenizing videos into continuous tokens (latent vectors) and discrete tokens (integers) efficiently and effectively.
- Video curation pipeline for building your own video dataset. [Coming soon]
- [Post-training scripts](cosmos1/models/POST_TRAINING.md) via NeMo Framework to post-train the pre-trained world foundation models for various Physical AI setups.
- Pre-training scripts via NeMo Framework for building your own world foundation model. [[Diffusion](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/diffusion)] [[Autoregressive](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/multimodal_autoregressive)] [[Tokenizer](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/diffusion/vae)].
## Model Family
| Model name | Description | Try it out |
|------------|----------|----------|
| [Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
| [Cosmos-1.0-Autoregressive-4B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-4B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Autoregressive-12B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-12B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Autoregressive-5B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-5B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Autoregressive-13B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-13B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
| [Cosmos-1.0-Guardrail](https://huggingface.co/nvidia/Cosmos-1.0-Guardrail) | Guardrail contains pre-Guard and post-Guard for safe use | Embedded in model inference scripts |
## Example Usage
### Inference
Follow the [Cosmos Installation Guide](INSTALL.md) to setup the docker. For inference with the pretrained models, please refer to [Cosmos Diffusion Inference](cosmos1/models/diffusion/README.md) and [Cosmos Autoregressive Inference](cosmos1/models/autoregressive/README.md).
The code snippet below provides a gist of the inference usage.
```bash
PROMPT="A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. \
The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. \
A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, \
suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. \
The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of \
field that keeps the focus on the robot while subtly blurring the background for a cinematic effect."
# Example using 7B model
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \
--prompt "$PROMPT" \
--offload_prompt_upsampler \
--video_save_name Cosmos-1.0-Diffusion-7B-Text2World
```
<video src="https://github.com/user-attachments/assets/db7bebfe-5314-40a6-b045-4f6ce0a87f2a">
Your browser does not support the video tag.
</video>
We also offer [multi-GPU inference](cosmos1/models/diffusion/nemo/inference/README.md) support for Diffusion Text2World WFM models through NeMo Framework.
### Post-training
NeMo Framework provides GPU accelerated post-training with general post-training for both [diffusion](cosmos1/models/diffusion/nemo/post_training/README.md) and [autoregressive](cosmos1/models/autoregressive/nemo/post_training/README.md) models, with other types of post-training coming soon.
## License and Contact
This project will download and install additional third-party open source software projects. Review the license terms of these open source projects before use.
NVIDIA Cosmos source code is released under the [Apache 2 License](https://www.apache.org/licenses/LICENSE-2.0).
NVIDIA Cosmos models are released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). For a custom license, please contact [[email protected]](mailto:[email protected]). | {
"source": "NVIDIA/Cosmos",
"title": "README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 7207
} |
# Release Cadence
| Version | Description | Date |
|------------|----------|----------|
| [v1.0](release_notes/v1p0.md) | Initial diffusion and autoregressive WFMs release | 2025-01-06 |
| [v0.1](release_notes/v0p1.md) | Initial tokenizer release | 2024-11-06 | | {
"source": "NVIDIA/Cosmos",
"title": "RELEASE.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/RELEASE.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 263
} |
# Checkpoint directory
Follow our instructions for downloading checkpoints in [Cosmos Diffusion Inference](../cosmos1/models/diffusion/README.md#download-checkpoints) and [Cosmos Autoregressive Inference](../cosmos1/models/autoregressive/README.md). Cosmos checkpoints will be downloaded to this directory. | {
"source": "NVIDIA/Cosmos",
"title": "checkpoints/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/checkpoints/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 307
} |
# Release note
- Cosmos 0.1 was released with the [Cosmos Tokenizer Webpage](https://research.nvidia.com/labs/dir/cosmos-tokenizer/).
- 10 tokenizers were released on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6), as shown in the table below.
- Inference scripts for the models were released in the [Cosmos Tokenizer repository](https://github.com/NVIDIA/Cosmos-Tokenizer).
## Released Models
| Item | Model name | Description | Try it out |
|--|------------|----------|----------|
|1| [Cosmos-0.1-Tokenizer-CI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI8x8) | Continuous image tokenizer with 8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|2| [Cosmos-0.1-Tokenizer-CI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CI16x16) | Continuous image tokenizer with 16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|3| [Cosmos-0.1-Tokenizer-DI8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI8x8) | Discrete image tokenizer with 8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|4| [Cosmos-0.1-Tokenizer-DI16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DI16x16) | Discrete image tokenizer with 16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|5| [Cosmos-0.1-Tokenizer-CV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV4x8x8) | Continuous video tokenizer with 4x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|6| [Cosmos-0.1-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x8x8) | Continuous video tokenizer with 8x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|7| [Cosmos-0.1-Tokenizer-CV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-CV8x16x16) | Continuous video tokenizer with 8x16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|8| [Cosmos-0.1-Tokenizer-DV4x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV4x8x8) | Discrete video tokenizer with 4x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|9| [Cosmos-0.1-Tokenizer-DV8x8x8](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x8x8) | Discrete video tokenizer with 8x8x8 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) |
|10| [Cosmos-0.1-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-Tokenizer-DV8x16x16) | Discrete video tokenizer with 8x16x16 compression ratio | [Inference](https://github.com/NVIDIA/Cosmos-Tokenizer) | | {
"source": "NVIDIA/Cosmos",
"title": "release_notes/v0p1.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/release_notes/v0p1.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 3021
} |
# Release Notes
# [02/10/2025](https://github.com/NVIDIA/Cosmos/commit/868ff171b9d676c53e094c4324a45a5f06d749e2)
- Cosmos Tokenizer inference and post-training support
- Cosmos Video2World post-training support
# [01/27/2025](https://github.com/NVIDIA/Cosmos/commit/c82c9dc6f9a2f046033d0a26ec525bc389b641ef)
- Stability and safety improvements
# [01/09/2025](https://github.com/NVIDIA/Cosmos/commit/a6e2fdd49053ae75836cedc2a99c7c84bc1c8c1b)
- Support [General Post-Training](../cosmos1/models/POST_TRAINING.md) through NeMo
# [01/06/2025](https://github.com/NVIDIA/Cosmos/commit/00d50f897a111069d43386e626aecb2167259bca)
- Initial release of Cosmos 1.0 along with the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai)
- 13 models were released on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6), as shown in the table below.
- Inference scripts for the models were released in the [Cosmos repository](https://github.com/NVIDIA/Cosmos).
| Item | Model name | Description | Try it out |
|--|------------|----------|----------|
|1| [Cosmos-1.0-Diffusion-7B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|2| [Cosmos-1.0-Diffusion-14B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Text2World) | Text to visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|3| [Cosmos-1.0-Diffusion-7B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|4| [Cosmos-1.0-Diffusion-14B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-14B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/diffusion/README.md) |
|5| [Cosmos-1.0-Autoregressive-4B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-4B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|6| [Cosmos-1.0-Autoregressive-12B](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-12B) | Future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|7| [Cosmos-1.0-Autoregressive-5B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-5B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|8| [Cosmos-1.0-Autoregressive-13B-Video2World](https://huggingface.co/nvidia/Cosmos-1.0-Autoregressive-13B-Video2World) | Video + Text based future visual world generation | [Inference](cosmos1/models/autoregressive/README.md) |
|9| [Cosmos-1.0-Tokenizer-CV8x8x8](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-CV8x8x8) | Continuous video tokenizer with 8x8x8 compression ratio | [Inference](cosmos1/models/diffusion/README.md) |
|10| [Cosmos-1.0-Tokenizer-DV8x16x16](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-DV8x16x16) | Discrete video tokenizer with 8x16x16 compression ratio | [Inference](cosmos1/models/autoregressive/README.md) |
|11| [Cosmos-1.0-PromptUpsampler-12B-Text2World](https://huggingface.co/nvidia/Cosmos-1.0-Prompt-Upsampler-12B-Text2World) | Prompt upsampler for Text2World | [Inference](cosmos1/models/diffusion/README.md) |
|12| [Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8](https://huggingface.co/nvidia/Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8) | Diffusion decoder for enhancing Cosmos 1.0 autoregressive WFMs' outputs | [Inference](cosmos1/models/autoregressive/README.md) |
|13| [Cosmos-1.0-Guardrail](https://huggingface.co/nvidia/Cosmos-1.0-Guardrail) | Guardrail contains pre-Guard and post-Guard for safe use | Embedded in model inference scripts | | {
"source": "NVIDIA/Cosmos",
"title": "release_notes/v1p0.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/release_notes/v1p0.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 3886
} |
# Cosmos Post-training
In the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai), we discuss several post-training examples of Cosmos pre-trained World Foundation Models (WFMs) for various Physical AI tasks, including
- General Post-Training: Fine-tune the WFM to generate a target distribution of videos based on the custom dataset. The target distribution could include a specific camera spec or a specific domain such as a factory.
- Instruction Control: Post-trains models for robotic manipulation to predict videos based on textual instructions, enabling robots to visually simulate tasks like folding clothes or picking up objects.
- Action Control: Post-trains models for robotic manipulation to predict the next visual frame based on action vectors, simulating robotic tasks like object handling or movement planning.
- Camera Control: Adds camera pose conditioning to generate 3D-consistent video simulations from single images, enabling joystick-like navigation in virtual environments.
- Multi-View Generation: Post-trains models for autonomous vehicles to generate synchronized multi-view videos from text prompts, simulating driving scenarios with multiple camera perspectives.
- Multi-View Generation with Vehicle Trajectory Control: Extends multi-view generation by incorporating trajectory inputs, enabling precise simulation of driving environments for autonomous vehicles, adhering to specified paths.
Except for instruction control, where the WFM is post-trained on a dataset of instruction-video pairs, all other cases require minor modifications of the network architecture. Post-training tasks will be supported by the NeMo Framework. In this initial release, we provide post-training scripts for the general post-training of both diffusion and autoregressive WFMs. Scripts for the other post-training tasks will be provided in a future release.
## Post-training Support Matrix
| Post-training Task | Diffusion WFM | Autoregressive WFM |
|---------------------|---------------|--------------------|
| General post-training | [Supported](../models/diffusion/nemo/post_training/README.md) | [Supported](../models/autoregressive/nemo/post_training/README.md) |
| Instruction control | Coming soon | Coming soon |
| Action control | Coming soon | Coming soon |
| Camera control | Coming soon | Coming soon |
| Multi-view generation | Coming soon | Coming soon |
| Multi-view generation with vehicle trajectory control | Coming soon | Coming soon | | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/POST_TRAINING.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/POST_TRAINING.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 2533
} |
# Cosmos Autoregressive-based World Foundation Models
## Table of Contents
- [Getting Started](#getting-started)
- [Set Up Docker Environment](#set-up-docker-environment)
- [Download Checkpoints](#download-checkpoints)
- [Usage](#usage)
- [Model Types](#model-types)
- [Single and Batch Generation](#single-and-batch-generation)
- [Sample Commands](#sample-commands)
- [Base Models (4B/12B)](#base-basepy-4b-and-12b)
- [Video2World Models (5B/13B)](#video2world-video2worldpy-5b-and-13b)
- [Arguments](#arguments)
- [Common Parameters](#common-parameters)
- [Base Specific Parameters](#base-specific-parameters)
- [Video2World Specific Parameters](#video2world-specific-parameters)
- [Safety Features](#safety-features)
This page details the steps for using the Cosmos autoregressive-based world foundation models.
## Getting Started
### Set Up Docker Environment
Follow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker.
### Download Checkpoints
1. Generate a [Hugging Face](https://huggingface.co/settings/tokens) access token and set its permission to 'Read' (the default is 'Fine-grained').
2. Log in to Hugging Face with the access token:
```bash
huggingface-cli login
```
3. Download the Cosmos model weights from [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6):
```bash
PYTHONPATH=$(pwd) python cosmos1/scripts/download_autoregressive.py --model_sizes 4B 5B 12B 13B
```
4. The downloaded files should be in the following structure:
```
checkpoints/
├── Cosmos-1.0-Autoregressive-4B
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Autoregressive-5B-Video2World
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Autoregressive-12B
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Autoregressive-13B-Video2World
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Tokenizer-CV8x8x8
│ ├── decoder.jit
│ ├── encoder.jit
│ └── mean_std.pt
├── Cosmos-1.0-Tokenizer-DV8x16x16
│ ├── decoder.jit
│ └── encoder.jit
├── Cosmos-1.0-Diffusion-7B-Decoder-DV8x16x16ToCV8x8x8
│ ├── aux_vars.pt
│ └── model.pt
└── Cosmos-1.0-Guardrail
├── aegis/
├── blocklist/
├── face_blur_filter/
└── video_content_safety_filter/
```
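If you only plan to run the smaller models, the download script in step 3 accepts a subset of sizes; for example (an illustrative sketch, not an official command):
```bash
# Download only the 4B base and 5B Video2World autoregressive checkpoints
PYTHONPATH=$(pwd) python cosmos1/scripts/download_autoregressive.py --model_sizes 4B 5B
```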
## Usage
### Model Types
There are two model types available for autoregressive world generation:
1. **Base**: Supports world generation from image/video input
* Models: `Cosmos-1.0-Autoregressive-4B` and `Cosmos-1.0-Autoregressive-12B`
* Inference script: [base.py](/cosmos1/models/autoregressive/inference/base.py)
2. **Video2World**: Supports world generation from image/video input and text input
* Models: `Cosmos-1.0-Autoregressive-5B-Video2World` and `Cosmos-1.0-Autoregressive-13B-Video2World`
* Inference script: [video2world.py](/cosmos1/models/autoregressive/inference/video2world.py)
Our models now support video extension up to 33 frames. Starting from either a single image or a 9-frame video input, they can generate the remaining frames to reach the 33-frame length (generating 32 or 24 frames, respectively).
We have evaluated all eight possible configurations (4 models × 2 vision input types: image or video) using 100 test videos on physical AI topics. Below are the failure rates for each configuration:
| Model | Image input | Video input (9 frames) |
|:------------------------------------------|:--------------:|:-------------------------:|
| Cosmos-1.0-Autoregressive-4B | 15% | 1% |
| Cosmos-1.0-Autoregressive-5B-Video2World | 7% | 2% |
| Cosmos-1.0-Autoregressive-12B | 2% | 1% |
| Cosmos-1.0-Autoregressive-13B-Video2World | 3% | 0% |
We define failure cases as videos with severe distortions, such as:
* Sudden appearance of large unexpected objects
* Video degrading to a single solid color
Note that the following are not considered failures in our analysis:
* Static video frames
* Minor object distortions or artifacts
### Single and Batch Generation
We support both single and batch video generation.
For generating a single video, `base` mode requires the input argument `--input_image_or_video_path` (image/video input), while `video2world` mode requires both `--input_image_or_video_path` (image/video input) and `--prompt` (text input).
Note that our model only works with 1024x640 resolution videos. If the input image/video is not in this resolution, it will be resized and cropped.
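If you prefer to pre-process inputs yourself, one possible approach (an optional sketch using ffmpeg; the file names are placeholders, and this step is not required since the pipeline resizes automatically) is:
```bash
# Upscale to cover 1024x640 while preserving aspect ratio, then center-crop to exactly 1024x640
ffmpeg -i my_input.mp4 -vf "scale=1024:640:force_original_aspect_ratio=increase,crop=1024:640" my_input_1024x640.mp4
```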
For generating a batch of videos, both `base` and `video2world` require `--batch_input_path` (path to a JSONL file). For `base`, the JSONL file should contain one visual input per line in the following format, where each line must contain a "visual_input" field:
```json
{"visual_input": "path/to/video1.mp4"}
{"visual_input": "path/to/video2.mp4"}
```
For `video2world`, each line in the JSONL file must contain both "prompt" and "visual_input" fields:
```json
{"prompt": "prompt1", "visual_input": "path/to/video1.mp4"}
{"prompt": "prompt2", "visual_input": "path/to/video2.mp4"}
```
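As an illustration only (prompts and paths below are placeholders), such a batch file can be written with a shell heredoc and then passed to the script via `--batch_input_path`:
```bash
# Create a two-entry video2world batch file (contents are example values)
cat > my_batch.jsonl << 'EOF'
{"prompt": "A robot arm stacks boxes in a warehouse.", "visual_input": "assets/clip1.mp4"}
{"prompt": "A forklift drives down a factory aisle.", "visual_input": "assets/clip2.mp4"}
EOF
```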
### Sample Commands
There are two main demo scripts for autoregressive world generation: `base.py` and `video2world.py`. Below you will find sample commands for single and batch generation, as well as commands for running with low-memory GPUs using model offloading. We also provide a memory usage table comparing different offloading strategies to help with configuration.
#### Base (base.py): 4B and 12B
Generates world from image/video input.
The `input_type` argument can be either `video` or `image`. We have tuned the sampling parameters `top_p` and `temperature` to achieve the best performance. Please use the provided values in the command examples.
Note that the command examples below all use video input. If you want to use image input, please change the `input_type` to `image`.
##### Single Generation
```bash
# Example using 4B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \
--input_type=video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--video_save_name=Cosmos-1.0-Autoregressive-4B \
--ar_model_dir=Cosmos-1.0-Autoregressive-4B \
--top_p=0.8 \
--temperature=1.0
# Example for low-memory GPUs using 4B model with model offloading
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \
--input_type=video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--video_save_name=Cosmos-1.0-Autoregressive-4B \
--ar_model_dir=Cosmos-1.0-Autoregressive-4B \
--top_p=0.8 \
--temperature=1.0 \
--offload_guardrail_models \
--offload_diffusion_decoder \
--offload_ar_model \
--offload_tokenizer
# Example using 12B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \
--input_type=video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--video_save_name=Cosmos-1.0-Autoregressive-12B \
--ar_model_dir=Cosmos-1.0-Autoregressive-12B \
--top_p=0.9 \
--temperature=1.0
# Example for low-memory GPUs using 12B model with model offloading
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \
--input_type=video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--video_save_name=Cosmos-1.0-Autoregressive-12B \
--ar_model_dir=Cosmos-1.0-Autoregressive-12B \
--top_p=0.9 \
--temperature=1.0 \
--offload_guardrail_models \
--offload_diffusion_decoder \
--offload_ar_model \
--offload_tokenizer
```
##### Batch Generation
```bash
# Example using 4B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \
--input_type=video \
--batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/base.jsonl \
--video_save_folder=outputs/Cosmos-1.0-Autoregressive-4B \
--ar_model_dir=Cosmos-1.0-Autoregressive-4B \
--top_p=0.8 \
--temperature=1.0
# Example using 12B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/base.py \
--input_type=video \
--batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/base.jsonl \
--video_save_folder=outputs/Cosmos-1.0-Autoregressive-12B \
--ar_model_dir=Cosmos-1.0-Autoregressive-12B \
--top_p=0.9 \
--temperature=1.0
```
##### Example Output
Here is an example output video generated using base.py with image input, using `Cosmos-1.0-Autoregressive-12B`:
<video src="https://github.com/user-attachments/assets/634403a5-1873-42d7-8dd0-eb7fb4ac8cf4">
Your browser does not support the video tag.
</video>
The input image used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.jpg`. The image is from the [BDD dataset](http://bdd-data.berkeley.edu/).
Here is an example output video generated using base.py with 9-frame video input, using `Cosmos-1.0-Autoregressive-12B`:
<video src="https://github.com/user-attachments/assets/1a3ff099-87d7-41e8-b149-a25cfcd4f40b">
Your browser does not support the video tag.
</video>
The input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`.
##### Inference Time and GPU Memory Usage
These numbers may vary based on system specifications and are provided for reference only.
| Offloading Strategy | Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B |
|-------------|---------|---------|
| No offloading | 31.3 GB | 47.5 GB |
| Guardrails | 28.9 GB | 45.2 GB |
| Guardrails & Diffusion decoder | 28.5 GB | 43.1 GB |
| Guardrails & Diffusion decoder & Tokenizer | 27.3 GB | 42.9 GB |
| Guardrails & Diffusion decoder & Tokenizer & AR model | 18.7 GB | 27.4 GB |
End-to-end inference runtime on one H100 without offloading and after model initialization:
| Cosmos-1.0-Autoregressive-4B | Cosmos-1.0-Autoregressive-12B |
|---------|---------|
| ~62 seconds | ~119 seconds |
#### Video2World (video2world.py): 5B and 13B
Generates world from image/video and text input.
The `input_type` argument can be either `text_and_video` or `text_and_image`. We have tuned the sampling parameters `top_p` and `temperature` to achieve the best performance. Please use the provided values in the command examples.
Note that the command examples below all use video input. If you want to use image input, please change the `input_type` to `text_and_image`.
##### Single Generation
```bash
# Example using 5B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
--input_type=text_and_video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
--video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \
--ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \
--top_p=0.7 \
--temperature=1.0
# Example for low-memory GPUs using 5B model with model offloading
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
--input_type=text_and_video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
--video_save_name=Cosmos-1.0-Autoregressive-5B-Video2World \
--ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \
--top_p=0.7 \
--temperature=1.0 \
--offload_guardrail_models \
--offload_diffusion_decoder \
--offload_ar_model \
--offload_tokenizer \
--offload_text_encoder_model
# Example using 13B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
--input_type=text_and_video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
--video_save_name=Cosmos-1.0-Autoregressive-13B-Video2World \
--ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \
--top_p=0.8 \
--temperature=1.0 \
--offload_guardrail_models
# Example for low-memory GPUs using 13B model with model offloading
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
--input_type=text_and_video \
--input_image_or_video_path=cosmos1/models/autoregressive/assets/v1p0/input.mp4 \
--prompt="A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
--video_save_name=Cosmos-1.0-Autoregressive-13B-Video2World \
--ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \
--top_p=0.8 \
--temperature=1.0 \
--offload_guardrail_models \
--offload_diffusion_decoder \
--offload_ar_model \
--offload_tokenizer \
--offload_text_encoder_model
```
##### Batch Generation
```bash
# Example using 5B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
--input_type=text_and_video \
--batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/video2world.jsonl \
--video_save_folder=outputs/Cosmos-1.0-Autoregressive-5B-Video2World \
--ar_model_dir=Cosmos-1.0-Autoregressive-5B-Video2World \
--top_p=0.7 \
--temperature=1.0
# Example using 13B model
CUDA_VISIBLE_DEVICES=0 PYTHONPATH=$(pwd) python cosmos1/models/autoregressive/inference/video2world.py \
--input_type=text_and_video \
--batch_input_path=cosmos1/models/autoregressive/assets/v1p0/batch_inputs/video2world.jsonl \
--video_save_folder=outputs/Cosmos-1.0-Autoregressive-13B-Video2World \
--ar_model_dir=Cosmos-1.0-Autoregressive-13B-Video2World \
--top_p=0.8 \
--temperature=1.0 \
--offload_guardrail_models
```
##### Example Output
Here is an example output video generated using video2world.py with image input, using `Cosmos-1.0-Autoregressive-13B-Video2World`:
<video src="https://github.com/user-attachments/assets/869f3b81-fabd-462e-a545-c04cdd9c1d22">
Your browser does not support the video tag.
</video>
The input image used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.jpg`. The prompt for generating the video is:
```
A driving video captures a serene urban street scene on a sunny day. The camera is mounted on the dashboard of a moving vehicle, providing a first-person perspective as it travels down a two-lane road. The street is lined with parked cars on both sides, predominantly black and silver sedans and SUVs. The road is flanked by a mix of residential and commercial buildings, with a prominent red-brick building on the left side, featuring multiple windows and a flat roof. The sky is clear with a few scattered clouds, casting soft shadows on the street. Trees with lush green foliage line the right side of the road, providing a natural contrast to the urban environment. The camera remains steady, maintaining a consistent forward motion, suggesting a leisurely drive. Traffic is light, with a few vehicles moving in the opposite direction, including a black sedan and a yellow taxi. Street signs are visible, including a no-parking sign on the right. The overall atmosphere is calm and peaceful, with no pedestrians visible, emphasizing the focus on the drive and the surrounding urban landscape.
```
Here is an example output video generated using video2world.py with 9-frame video input, using `Cosmos-1.0-Autoregressive-13B-Video2World`:
<video src="https://github.com/user-attachments/assets/81840e1c-624b-4b01-9240-ab7db3722e58">
Your browser does not support the video tag.
</video>
The input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`. The prompt for generating the video is:
```
A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions.
```
##### Inference Time and GPU Memory Usage
These numbers may vary based on system specifications and are provided for reference only.
| Offloading Strategy | Cosmos-1.0-Autoregressive-5B-Video2World | Cosmos-1.0-Autoregressive-13B-Video2World |
|-------------|---------|---------|
| No offloading | 66.2 GB | > 80 GB |
| Guardrails | 58.7 GB | 76.6 GB |
| Guardrails & T5 encoder | 41.3 GB | 58.0 GB |
| Guardrails & T5 encoder & Diffusion decoder | 29.0 GB | 46.9 GB |
| Guardrails & T5 encoder & Diffusion decoder & Tokenizer | 28.8 GB | 46.7 GB |
| Guardrails & T5 encoder & Diffusion decoder & Tokenizer & AR model | 21.1 GB | 30.9 GB |
End-to-end inference runtime on one H100, with no offloading for the 5B model and guardrail offloading for the 13B model, after model initialization:
| Cosmos-1.0-Autoregressive-5B-Video2World | Cosmos-1.0-Autoregressive-13B-Video2World |
|---------|---------|
| ~73 seconds | ~150 seconds |
### Arguments
#### Common Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--checkpoint_dir` | Directory containing model weights | "checkpoints" |
| `--video_save_name` | Output video filename for single video generation | "output" |
| `--video_save_folder` | Folder where all output videos are stored | "outputs/" |
| `--input_image_or_video_path` | Input image or video path. Required for single video generation | None |
| `--batch_input_path` | Folder containing input images or videos. Required for batch video generation | None |
| `--num_input_frames` | Number of input frames to use for Video2World prediction | 9 |
| `--temperature` | Temperature used while sampling | 1.0 (recommend using values in sample commands provided) |
| `--top_p` | Top-p value for top-p sampling | 0.8 (recommend using values in sample commands provided) |
| `--seed` | Random seed | 0 |
| `--disable_diffusion_decoder` | When set to True, use discrete tokenizer to decode discrete tokens to video. Otherwise, use diffusion decoder to decode video | False |
| `--offload_guardrail_models` | Offload guardrail models after inference, used for low-memory GPUs | False |
| `--offload_diffusion_decoder` | Offload diffusion decoder after inference, used for low-memory GPUs | False |
| `--offload_ar_model` | Offload AR model after inference, used for low-memory GPUs | False |
| `--offload_prompt_upsampler` | Offload prompt upsampler after inference, used for low-memory GPUs | False |
#### Base Specific Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--ar_model_dir` | Directory containing AR model weight | "Cosmos-1.0-Autoregressive-4B" |
| `--input_type` | Input type, either `video` or `image` | "video" |
#### Video2World Specific Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--ar_model_dir` | Directory containing AR model weight | "Cosmos-1.0-Autoregressive-4B" |
| `--input_type` | Input type, either `text_and_video` or `text_and_image` | "text_and_video" |
| `--prompt` | Text prompt for single video generation. Required for single video generation | None |
| `--input_prompts_path` | Path to JSONL file for batch video generation. Required for batch video generation | None |
| `--offload_text_encoder_model` | Offload text encoder after inference, used for low-memory GPUs | False |
### Safety Features
The model uses a built-in safety guardrail system that cannot be disabled. Generating human faces is not allowed; any generated faces will be blurred by the guardrail.
For more information, check out the [Cosmos Guardrail Documentation](../guardrail/README.md). | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/autoregressive/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 20314
} |
# Cosmos Diffusion-based World Foundation Models
## Table of Contents
- [Getting Started](#getting-started)
- [Set Up Docker Environment](#set-up-docker-environment)
- [Download Checkpoints](#download-checkpoints)
- [Usage](#usage)
- [Model Types](#model-types)
- [Single and Batch Generation](#single-and-batch-generation)
- [Sample Commands](#sample-commands)
- [Text2World](#text2world-text2worldpy-7b-and-14b)
- [Video2World](#video2world-video2worldpy-7b-and-14b)
- [Arguments](#arguments)
- [Common Parameters](#common-parameters)
- [Text2World Specific Parameters](#text2world-specific-parameters)
- [Video2World Specific Parameters](#video2world-specific-parameters)
- [Safety Features](#safety-features)
- [Prompting Instructions](#prompting-instructions)
This page details the steps for using the Cosmos diffusion-based world foundation models.
## Getting Started
### Set Up Docker Environment
Follow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker.
### Download Checkpoints
1. Generate a [Hugging Face](https://huggingface.co/settings/tokens) access token and set its permission to 'Read' (the default is 'Fine-grained').
2. Log in to Hugging Face with the access token:
```bash
huggingface-cli login
```
3. Request access to Mistral AI's Pixtral-12B model by clicking on `Agree and access repository` on [Pixtral's Hugging Face model page](https://huggingface.co/mistralai/Pixtral-12B-2409). This step is required to use Pixtral 12B for the Video2World prompt upsampling task.
4. Download the Cosmos model weights from [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-6751e884dc10e013a0a0d8e6):
```bash
PYTHONPATH=$(pwd) python cosmos1/scripts/download_diffusion.py --model_sizes 7B 14B --model_types Text2World Video2World
```
5. The downloaded files should be in the following structure:
```
checkpoints/
├── Cosmos-1.0-Diffusion-7B-Text2World
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Diffusion-14B-Text2World
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Diffusion-7B-Video2World
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Diffusion-14B-Video2World
│ ├── model.pt
│ └── config.json
├── Cosmos-1.0-Tokenizer-CV8x8x8
│ ├── decoder.jit
│ ├── encoder.jit
│ └── mean_std.pt
├── Cosmos-1.0-Prompt-Upsampler-12B-Text2World
│ ├── model.pt
│ └── config.json
├── Pixtral-12B
│ ├── model.pt
│   └── config.json
└── Cosmos-1.0-Guardrail
├── aegis/
├── blocklist/
├── face_blur_filter/
└── video_content_safety_filter/
```
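As an optional check (not part of the official steps), you can confirm that the downloaded layout matches the listing above:
```bash
# List the top two levels under checkpoints/ for a quick comparison with the expected structure
find checkpoints -maxdepth 2 | sort
```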
## Usage
### Model Types
There are two model types available for diffusion world generation:
1. **Text2World**: Supports world generation from text input
* Models: `Cosmos-1.0-Diffusion-7B-Text2World` and `Cosmos-1.0-Diffusion-14B-Text2World`
* Inference script: [text2world.py](/cosmos1/models/diffusion/inference/text2world.py)
2. **Video2World**: Supports world generation from text and image/video input
* Models: `Cosmos-1.0-Diffusion-7B-Video2World` and `Cosmos-1.0-Diffusion-14B-Video2World`
* Inference script: [video2world.py](/cosmos1/models/diffusion/inference/video2world.py)
### Single and Batch Generation
We support both single and batch video generation.
For generating a single video, `Text2World` mode requires the input argument `--prompt` (text input). `Video2World` mode requires `--input_image_or_video_path` (image/video input). Additionally for Video2World, if the prompt upsampler is disabled, a text prompt must also be provided using the `--prompt` argument.
For generating a batch of videos, both `Text2World` and `Video2World` require `--batch_input_path` (path to a JSONL file). For `Text2World`, the JSONL file should contain one prompt per line in the following format, where each line must contain a "prompt" field:
```json
{"prompt": "prompt1"}
{"prompt": "prompt2"}
```
For `Video2World`, each line in the JSONL file must contain a "visual_input" field:
```json
{"visual_input": "path/to/video1.mp4"}
{"visual_input": "path/to/video2.mp4"}
```
If you disable the prompt upsampler by setting the `--disable_prompt_upsampler` flag, each line in the JSONL file will need to include both "prompt" and "visual_input" fields.
```json
{"prompt": "prompt1", "visual_input": "path/to/video1.mp4"}
{"prompt": "prompt2", "visual_input": "path/to/video2.mp4"}
```
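As a small convenience sketch (directory and file names are placeholders), a `Video2World` batch file in the default format can be assembled from a folder of clips and passed via `--batch_input_path`:
```bash
# Build a Video2World batch file from all .mp4 clips in a folder (paths are examples)
for f in assets/clips/*.mp4; do
  printf '{"visual_input": "%s"}\n' "$f"
done > video2world_batch.jsonl
```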
### Sample Commands
There are two main demo scripts for diffusion world generation: `text2world.py` and `video2world.py`. Below you will find sample commands for single and batch generation, as well as commands for running with low-memory GPUs using model offloading. We also provide a memory usage table comparing different offloading strategies to help with configuration.
#### Text2World (text2world.py): 7B and 14B
Generates world from text input.
##### Single Generation
```bash
PROMPT="A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. \
The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. \
A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, \
suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. \
The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of \
field that keeps the focus on the robot while subtly blurring the background for a cinematic effect."
# Example using 7B model
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \
--prompt "$PROMPT" \
--offload_prompt_upsampler \
--video_save_name Cosmos-1.0-Diffusion-7B-Text2World
# Example using the 7B model on low-memory GPUs with model offloading. The speed is slower if using batch generation.
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \
--prompt "$PROMPT" \
--video_save_name Cosmos-1.0-Diffusion-7B-Text2World_memory_efficient \
--offload_tokenizer \
--offload_diffusion_transformer \
--offload_text_encoder_model \
--offload_prompt_upsampler \
--offload_guardrail_models
# Example using 14B model with prompt upsampler offloading (required on H100)
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-14B-Text2World \
--prompt "$PROMPT" \
--video_save_name Cosmos-1.0-Diffusion-14B-Text2World \
--offload_prompt_upsampler \
--offload_guardrail_models
```
##### Batch Generation
```bash
# Example using 7B model
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \
--batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/text2world.jsonl \
--video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Text2World \
--offload_prompt_upsampler
```
##### Example Output
Here is an example output video generated using text2world.py, using `Cosmos-1.0-Diffusion-7B-Text2World`:
<video src="https://github.com/user-attachments/assets/db7bebfe-5314-40a6-b045-4f6ce0a87f2a">
Your browser does not support the video tag.
</video>
The upsampled prompt used to generate the video is:
```
In a sprawling, meticulously organized warehouse, a sleek humanoid robot stands sentinel amidst towering shelves brimming with neatly stacked cardboard boxes. The robot's metallic body, adorned with intricate joints and a glowing blue chest light, radiates an aura of advanced technology, its design a harmonious blend of functionality and futuristic elegance. The camera captures this striking figure in a static, wide shot, emphasizing its poised stance against the backdrop of industrial wooden pallets. The lighting is bright and even, casting a warm glow that accentuates the robot's form, while the shallow depth of field subtly blurs the rows of boxes, creating a cinematic depth that draws the viewer into this high-tech realm. The absence of human presence amplifies the robot's solitary vigil, inviting contemplation of its purpose within this vast, organized expanse.
```
If you disable the prompt upsampler by using the `--disable_prompt_upsampler` flag, the output video will be generated using the original prompt:
<video src="https://github.com/user-attachments/assets/b373c692-9900-4e73-80c2-4016caa47a82">
Your browser does not support the video tag.
</video>
The original prompt is:
```
A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked cardboard boxes on industrial shelves. The robot's metallic body gleams under the bright, even lighting, highlighting its futuristic design and intricate joints. A glowing blue light emanates from its chest, adding a touch of advanced technology. The background is dominated by rows of boxes, suggesting a highly organized storage system. The floor is lined with wooden pallets, enhancing the industrial setting. The camera remains static, capturing the robot's poised stance amidst the orderly environment, with a shallow depth of field that keeps the focus on the robot while subtly blurring the background for a cinematic effect.
```
Note that the robot's face may sometimes be blurred by the guardrail in this example.
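For reference, a minimal variant of the earlier 7B command with the upsampler disabled might look like the following (a sketch reusing the prompt and flags shown above; the output name is arbitrary):
```bash
# Generate directly from the original prompt, skipping the prompt upsampler
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/text2world.py \
    --checkpoint_dir checkpoints \
    --diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Text2World \
    --prompt "$PROMPT" \
    --disable_prompt_upsampler \
    --video_save_name Cosmos-1.0-Diffusion-7B-Text2World_original_prompt
```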
##### Inference Time and GPU Memory Usage
The numbers provided below may vary depending on system specs and are for reference only.
We report the maximum observed GPU memory usage during end-to-end inference. Additionally, we offer a series of model offloading strategies to help users manage GPU memory usage effectively.
For GPUs with limited memory (e.g., RTX 3090/4090 with 24 GB memory), we recommend fully offloading all models. For higher-end GPUs, users can select the most suitable offloading strategy considering the numbers provided below.
| Offloading Strategy | 7B Text2World | 14B Text2World |
|-------------|---------|---------|
| Offload prompt upsampler | 74.0 GB | > 80.0 GB |
| Offload prompt upsampler & guardrails | 57.1 GB | 70.5 GB |
| Offload prompt upsampler & guardrails & T5 encoder | 38.5 GB | 51.9 GB |
| Offload prompt upsampler & guardrails & T5 encoder & tokenizer | 38.3 GB | 51.7 GB |
| Offload prompt upsampler & guardrails & T5 encoder & tokenizer & diffusion model | 24.4 GB | 39.0 GB |
The table below presents the end-to-end inference runtime on a single H100 GPU, excluding model initialization time.
| 7B Text2World (offload prompt upsampler) | 14B Text2World (offload prompt upsampler, guardrails) |
|---------|---------|
| ~380 seconds | ~590 seconds |
#### Video2World (video2world.py): 7B and 14B
Generates world from text and image/video input.
##### Single Generation
Note that our prompt upsampler is enabled by default for Video2World, and it will generate the prompt from the input image/video. If the prompt upsampler is disabled, you can provide a prompt manually using the `--prompt` flag.
```bash
# Example using the 7B model
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \
--input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \
--num_input_frames 1 \
--video_save_name Cosmos-1.0-Diffusion-7B-Video2World \
--offload_prompt_upsampler
# Example using the 7B model on low-memory GPUs with model offloading. The speed is slower if using batch generation.
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \
--input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \
--num_input_frames 1 \
--video_save_name Cosmos-1.0-Diffusion-7B-Video2World_memory_efficient \
--offload_tokenizer \
--offload_diffusion_transformer \
--offload_text_encoder_model \
--offload_prompt_upsampler \
--offload_guardrail_models
# Example using 14B model with prompt upsampler offloading (required on H100)
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-14B-Video2World \
--input_image_or_video_path cosmos1/models/diffusion/assets/v1p0/video2world_input0.jpg \
--num_input_frames 1 \
--video_save_name Cosmos-1.0-Diffusion-14B-Video2World \
--offload_prompt_upsampler \
--offload_guardrail_models
```
##### Batch Generation
```bash
# Example using 7B model with 9 input frames
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \
--batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/video2world_ps.jsonl \
--video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Video2World \
--offload_prompt_upsampler \
--num_input_frames 9
# Example using 7B model with 9 input frames without prompt upsampler, using 'prompt' field in the JSONL file
PYTHONPATH=$(pwd) python cosmos1/models/diffusion/inference/video2world.py \
--checkpoint_dir checkpoints \
--diffusion_transformer_dir Cosmos-1.0-Diffusion-7B-Video2World \
--batch_input_path cosmos1/models/diffusion/assets/v1p0/batch_inputs/video2world_wo_ps.jsonl \
--video_save_folder outputs/Cosmos-1.0-Diffusion-7B-Video2World_wo_ps \
--disable_prompt_upsampler \
--num_input_frames 9
```
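The two sample files differ only in whether a `prompt` field is present: `video2world_ps.jsonl` relies on the upsampler to describe each input, while `video2world_wo_ps.jsonl` carries an explicit `prompt` per line. A rough sketch of building such a file is below; the key used for the input image/video path (`visual_input` here) is a hypothetical name for illustration only, so confirm the actual field names against the shipped JSONL files.
```python
import json

# Sketch only: "visual_input" is a placeholder key, not confirmed by this
# document -- inspect video2world_ps.jsonl / video2world_wo_ps.jsonl for the
# real schema before running.
samples = [
    # Upsampler enabled: the prompt is generated from the input video.
    {"visual_input": "videos/robot_arm.mp4"},
    # Upsampler disabled (--disable_prompt_upsampler): the "prompt" field is used.
    {"visual_input": "videos/highway.mp4",
     "prompt": "A highway scene recorded from a moving vehicle."},
]

with open("my_video2world_batch.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```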
##### Example Output
Here is an example output video generated by `video2world.py` using `Cosmos-1.0-Diffusion-14B-Video2World`:
<video src="https://github.com/user-attachments/assets/a840a338-5090-4f50-9790-42b7ede86ba6">
Your browser does not support the video tag.
</video>
The upsampled prompt (generated by the prompt upsampler) used to generate the video is:
```
The video depicts a long, straight highway stretching into the distance, flanked by metal guardrails. The road is divided into multiple lanes, with a few vehicles visible in the far distance. The surrounding landscape features dry, grassy fields on one side and rolling hills on the other. The sky is mostly clear with a few scattered clouds, suggesting a bright, sunny day.
```
##### Inference Time and GPU Memory Usage
The numbers provided below may vary depending on system specs and are for reference only.
| Offloading Strategy | 7B Video2World | 14B Video2World |
|----------------------------------------------------------------------------------|---------|---------|
| Offload prompt upsampler | 76.5 GB | > 80.0 GB |
| Offload prompt upsampler & guardrails | 59.9 GB | 73.3 GB |
| Offload prompt upsampler & guardrails & T5 encoder | 41.3 GB | 54.8 GB |
| Offload prompt upsampler & guardrails & T5 encoder & tokenizer | 41.1 GB | 54.5 GB |
| Offload prompt upsampler & guardrails & T5 encoder & tokenizer & diffusion model | 27.3 GB | 39.0 GB |
The following table shows the end-to-end inference runtime on a single H100 GPU, excluding model initialization time:
| 7B Video2World (offload prompt upsampler) | 14B Video2World (offload prompt upsampler, guardrails) |
|---------|---------|
| ~383 seconds | ~593 seconds |
### Arguments
#### Common Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--checkpoint_dir` | Directory containing model weights | "checkpoints" |
| `--tokenizer_dir` | Directory containing tokenizer weights | "Cosmos-1.0-Tokenizer-CV8x8x8" |
| `--video_save_name` | Output video filename for single video generation | "output" |
| `--video_save_folder` | Output directory for batch video generation | "outputs/" |
| `--prompt` | Text prompt for single video generation. Required for single video generation. | None |
| `--batch_input_path` | Path to JSONL file for batch video generation. Required for batch video generation. | None |
| `--negative_prompt` | Negative prompt for improved quality | "The video captures a series of frames showing ugly scenes..." |
| `--num_steps` | Number of diffusion sampling steps | 35 |
| `--guidance` | CFG guidance scale | 7.0 |
| `--num_video_frames` | Number of frames to generate | 121 |
| `--height` | Output video height | 704 |
| `--width` | Output video width | 1280 |
| `--fps` | Frames per second | 24 |
| `--seed` | Random seed | 1 |
| `--disable_prompt_upsampler` | Disable automatic prompt enhancement | False |
| `--offload_diffusion_transformer` | Offload DiT model after inference, used for low-memory GPUs | False |
| `--offload_tokenizer` | Offload VAE model after inference, used for low-memory GPUs | False |
| `--offload_text_encoder_model` | Offload text encoder after inference, used for low-memory GPUs | False |
| `--offload_prompt_upsampler` | Offload prompt upsampler after inference, used for low-memory GPUs | False |
| `--offload_guardrail_models` | Offload guardrail models after inference, used for low-memory GPUs | False |
Note: we support various aspect ratios, including 1:1 (960x960 for height and width), 4:3 (960x704), 3:4 (704x960), 16:9 (1280x704), and 9:16 (704x1280). The frame rate is also adjustable within a range of 12 to 40 fps. The current version of the model only supports 121 frames.
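For convenience, the aspect ratios above can be mapped to `--height`/`--width` pairs; the sketch below reads the note's pairs as width x height, which matches the 16:9 default of `--width 1280` / `--height 704`, so double-check the orientation before relying on it.
```python
# Aspect ratio -> flag values, derived from the note above.
# Interpretation assumes the note lists width x height, which matches the
# 16:9 default of --height 704 / --width 1280.
SUPPORTED_RESOLUTIONS = {
    "1:1":  {"height": 960,  "width": 960},
    "4:3":  {"height": 704,  "width": 960},
    "3:4":  {"height": 960,  "width": 704},
    "16:9": {"height": 704,  "width": 1280},  # default
    "9:16": {"height": 1280, "width": 704},
}

ratio = "9:16"
res = SUPPORTED_RESOLUTIONS[ratio]
print(f"--height {res['height']} --width {res['width']} --fps 24")
```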
#### Text2World Specific Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--diffusion_transformer_dir` | Directory containing DiT weights | "Cosmos-1.0-Diffusion-7B-Text2World" |
| `--prompt_upsampler_dir` | Directory containing prompt upsampler weights | "Cosmos-1.0-Prompt-Upsampler-12B-Text2World" |
| `--word_limit_to_skip_upsampler` | Skip prompt upsampler for better robustness if the number of words in the prompt is greater than this value | 250 |
#### Video2World Specific Parameters
| Parameter | Description | Default |
|-----------|-------------|---------|
| `--diffusion_transformer_dir` | Directory containing DiT weights | "Cosmos-1.0-Diffusion-7B-Video2World" |
| `--prompt_upsampler_dir` | Directory containing prompt upsampler weights | "Pixtral-12B" |
| `--input_image_or_video_path` | Input video/image path for single video generation. Required for single video generation. | None |
| `--num_input_frames` | Number of video frames (1 or 9) | 1 |
### Safety Features
The model uses a built-in safety guardrail system that cannot be disabled. Generating human faces is not allowed; any generated faces will be blurred by the guardrail.
For more information, check out the [Cosmos Guardrail Documentation](../guardrail/README.md).
### Prompting Instructions
The input prompt is the most important parameter under the user's control when interacting with the model. Providing rich and descriptive prompts can positively impact the output quality of the model, whereas short and poorly detailed prompts can lead to subpar video generation. Here are some recommendations to keep in mind when crafting text prompts for the model:
1. **Describe a single, captivating scene**: Focus on a single scene to prevent the model from generating videos with unnecessary shot changes.
2. **Limit camera control instructions**: The model doesn't handle prompts involving camera control well, as this feature is still under development.
3. **Prompt upsampler limitations**: The current version of the prompt upsampler may sometimes deviate from the original intent of your prompt, adding unwanted details. If this happens, you can disable the upsampler with the `--disable_prompt_upsampler` flag and edit your prompt manually. We recommend using prompts of around 120 words for optimal quality.
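As a quick sanity check before launching a run, you can count the words in your prompt against the ~120-word recommendation above and the 250-word `--word_limit_to_skip_upsampler` threshold listed in the Text2World parameter table above (a minimal sketch; the thresholds are the ones documented on this page):
```python
def check_prompt_length(prompt: str, skip_limit: int = 250, recommended: int = 120) -> None:
    """Report how the prompt length interacts with the upsampler settings."""
    n_words = len(prompt.split())
    if n_words > skip_limit:
        print(f"{n_words} words: exceeds --word_limit_to_skip_upsampler, the upsampler will be skipped.")
    elif n_words > recommended:
        print(f"{n_words} words: above the ~{recommended}-word recommendation, consider trimming.")
    else:
        print(f"{n_words} words: within the recommended length.")

check_prompt_length("A sleek, humanoid robot stands in a vast warehouse filled with neatly stacked boxes.")
```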
#### Cosmos-1.0-Prompt-Upsampler
The prompt upsampler automatically expands brief prompts into more detailed descriptions (Text2World) or generates detailed prompts based on input images (Video2World).
##### Text2World
When enabled (default), the upsampler will:
1. Take your input prompt
2. Process it through a finetuned Mistral model to generate a more detailed description
3. Use the expanded description for video generation
This can help generate better quality videos by providing more detailed context to the video generation model. To disable this feature, use the `--disable_prompt_upsampler` flag.
##### Video2World
When enabled (default), the upsampler will:
1. Take your input image or video
2. Process it through a Pixtral model to generate a detailed description
3. Use the generated description for video generation
Please note that the Video2World prompt upsampler does not consider any user-provided text prompt. To disable this feature, use the `--disable_prompt_upsampler` flag. | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/diffusion/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 21295
} |
# Cosmos Guardrail
This page outlines a set of tools to ensure content safety in Cosmos. For implementation details, please consult the [Cosmos paper](https://research.nvidia.com/publication/2025-01_cosmos-world-foundation-model-platform-physical-ai).
## Overview
Our guardrail system consists of two stages: pre-Guard and post-Guard.
Cosmos pre-Guard models are applied to text input, including input prompts and upsampled prompts.
* Blocklist: a keyword list checker for detecting harmful keywords
* Aegis: an LLM-based approach for blocking harmful prompts
Cosmos post-Guard models are applied to video frames generated by Cosmos models.
* Video Content Safety Filter: a classifier trained to distinguish between safe and unsafe video frames
* Face Blur Filter: a face detection and blurring module
## Usage
Cosmos Guardrail models are integrated into the diffusion and autoregressive world generation pipelines in this repo. Check out the [Cosmos Diffusion Documentation](../diffusion/README.md) and [Cosmos Autoregressive Documentation](../autoregressive/README.md) to download the Cosmos Guardrail checkpoints and run the end-to-end demo scripts with our Guardrail models. | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/guardrail/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/guardrail/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 1187
} |
<!-- # SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. -->
# Cosmos Tokenizer: A suite of image and video neural tokenizers.
### [Website](https://research.nvidia.com/labs/dir/cosmos-tokenizer) | [Paper](https://arxiv.org/abs/2501.03575) | [NVIDIA Cosmos](https://www.nvidia.com/en-us/ai/cosmos/) | [NVIDIA Blog](https://developer.nvidia.com/blog/state-of-the-art-multimodal-generative-ai-model-development-with-nvidia-nemo/) | [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6) | [YouTube](https://youtu.be/Soy_myOfWIU) | [TokenBench](https://github.com/NVlabs/TokenBench)
We present [**NVIDIA Cosmos Tokenizer**](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer), a suite of image and video tokenizers that advances the state-of-the-art in visual tokenization, paving the way for scalable, robust and efficient development of large auto-regressive transformers (such as LLMs) or diffusion generators. Cosmos Tokenizer is the core component of the [**NVIDIA Cosmos**](https://github.com/NVIDIA/Cosmos), a developer-first video foundation model platform designed to help Physical AI developers build their Physical AI systems better and faster. Please check out our [demo video](https://youtu.be/Soy_myOfWIU).
| | Continuous ( C ) | Discrete ( D ) |
| ------------------|---------------------|---------------------|
| **Images ( I )** | Cosmos-Tokenizer-CI | Cosmos-Tokenizer-DI |
| **Videos ( V )** | Cosmos-Tokenizer-CV | Cosmos-Tokenizer-DV |
<video src="https://github.com/user-attachments/assets/a40b0cc0-17dc-42e9-a97c-fe1c8bb03548" controls poster="https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/assets/cosmos-tokenizer.jpg?raw=true">
Your browser does not support the video tag.
</video>
Given an image or video, Cosmos Tokenizer outputs either continuous latents or discrete tokens. Cosmos Tokenizer achieves spatial compression rates of 8x or 16x and temporal compression factors of 4x or 8x, resulting in a total compression factor of up to 2048x (=8x16x16).
Cosmos Tokenizer delivers 8x more total compression than state-of-the-art (SOTA) methods, while simultaneously maintaining higher image quality and running up to 12x faster than the best available SOTA tokenizers.

## Web Demo
* Image Tokenization [](https://colab.research.google.com/github/nvidia/Cosmos/blob/main/cosmos1/models/tokenizer/notebook/Image_Tokenization.ipynb)
* Video Tokenization [](https://colab.research.google.com/github/nvidia/Cosmos/blob/main/cosmos1/models/tokenizer/notebook/Video_Tokenization.ipynb)
## Licenses
- **Models**: The models are licensed under [NVIDIA Open Model License](https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf). Under the NVIDIA Open Model License, NVIDIA confirms:
- Models are commercially usable.
- You are free to create and distribute Derivative Models.
- NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
- **GitHub Code**: This repository is licensed under the [Apache 2.0
license](https://github.com/NVIDIA/Cosmos/blob/main/LICENSE).
## Installation
Follow our [Installation Guide](../../../INSTALL.md) to set up the Docker environment. All commands on this page should be run inside Docker.
## Download Pre-trained Checkpoints from Hugging Face
We host 12 Cosmos-Tokenizer models on [Hugging Face](https://huggingface.co/collections/nvidia/cosmos-tokenizer-672b93023add81b66a8ff8e6), with the following model names. You can use this snippet to download:
```python
from huggingface_hub import login, snapshot_download
import os
login(token="<YOUR-HF-TOKEN>", add_to_git_credential=True)
model_names = [
"Cosmos-0.1-Tokenizer-CI8x8",
"Cosmos-0.1-Tokenizer-CI16x16",
"Cosmos-0.1-Tokenizer-CV4x8x8",
"Cosmos-0.1-Tokenizer-CV8x8x8",
"Cosmos-0.1-Tokenizer-CV8x16x16",
"Cosmos-0.1-Tokenizer-DI8x8",
"Cosmos-0.1-Tokenizer-DI16x16",
"Cosmos-0.1-Tokenizer-DV4x8x8",
"Cosmos-0.1-Tokenizer-DV8x8x8",
"Cosmos-0.1-Tokenizer-DV8x16x16",
"Cosmos-1.0-Tokenizer-CV8x8x8",
"Cosmos-1.0-Tokenizer-DV8x16x16",
]
for model_name in model_names:
hf_repo = "nvidia/" + model_name
local_dir = "checkpoints/" + model_name
print(f"downloading {model_name}...")
snapshot_download(repo_id=hf_repo, local_dir=local_dir)
```
Under the checkpoint repository `checkpoints/{model_name}`, we provide the encoder, decoder and the full autoencoder JIT models.
```bash
├── Cosmos-1.0-Tokenizer-CV8x8x8/
│ ├── encoder.jit
│ ├── decoder.jit
│ ├── autoencoder.jit
```
## Running the code
You can use the following example commands to encode and decode images or videos. <br />
For each, the same command works for both continuous and discrete tokenization. Simply provide the appropriate JIT-compiled checkpoints to `checkpoint_enc` and `checkpoint_dec`, or the full autoencoder checkpoint to `checkpoint`.
### Encoding into Continuous Latent Space
```python
import torch
from cosmos1.models.tokenizer.inference.video_lib import CausalVideoTokenizer
model_name = "Cosmos-0.1-Tokenizer-CV4x8x8"
input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) # [B, C, T, H, W]
encoder = CausalVideoTokenizer(checkpoint_enc=f'checkpoints/{model_name}/encoder.jit')
(latent,) = encoder.encode(input_tensor)
torch.testing.assert_close(latent.shape, (1, 16, 3, 64, 64))
# The input tensor can be reconstructed by the decoder as:
decoder = CausalVideoTokenizer(checkpoint_dec=f'checkpoints/{model_name}/decoder.jit')
reconstructed_tensor = decoder.decode(latent)
torch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape)
```
The `latent` will have the shape `(1, 16, 3, 64, 64)`, where the first of the three latents represents the first frame, and C=16 is the number of channels of the latent.
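The latent shape can be predicted directly from the compression factors in the model name: `CV4x8x8` compresses time by 4x and each spatial dimension by 8x, and the 9 → 3 temporal mapping above is consistent with the first frame being encoded on its own. A small sketch of that arithmetic (the causal `(T - 1) / 4 + 1` rule is inferred from the example, so treat it as an assumption for other frame counts):
```python
def expected_latent_shape(batch, channels_latent, frames, height, width,
                          temporal_compression=4, spatial_compression=8):
    """Latent shape for a causal video tokenizer, assuming the first frame is
    encoded on its own: T_latent = (T - 1) / temporal_compression + 1.
    This matches the 9-frame -> 3-latent example above; treat it as a sketch."""
    t_latent = (frames - 1) // temporal_compression + 1
    return (batch, channels_latent, t_latent,
            height // spatial_compression, width // spatial_compression)

# Reproduces the (1, 16, 3, 64, 64) latent shape of the CV4x8x8 example above.
print(expected_latent_shape(1, 16, 9, 512, 512))
```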
### Encoding into Discrete Tokens
```python
import torch
from cosmos1.models.tokenizer.inference.video_lib import CausalVideoTokenizer
model_name = "Cosmos-0.1-Tokenizer-DV4x8x8"
input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16) # [B, C, T, H, W]
encoder = CausalVideoTokenizer(checkpoint_enc=f'checkpoints/{model_name}/encoder.jit')
(indices, codes) = encoder.encode(input_tensor)
torch.testing.assert_close(indices.shape, (1, 3, 64, 64))
torch.testing.assert_close(codes.shape, (1, 6, 3, 64, 64))
# The input tensor can be reconstructed by the decoder as:
decoder = CausalVideoTokenizer(checkpoint_dec=f'checkpoints/{model_name}/decoder.jit')
reconstructed_tensor = decoder.decode(indices)
torch.testing.assert_close(reconstructed_tensor.shape, input_tensor.shape)
```
The `indices` will have the shape `(1, 3, 64, 64)` and contain integer values in the range `[1..64K]`, where the first of the three integer maps represents the first frame.
The `codes` will contain the pre-quantization continuous latent with shape `(1, 6, 3, 64, 64)`, where C=6 represents the number of FSQ levels.
## Torchscript (PyTorch JIT) Inference APIs
The following instructions run the various tokenizers on the example image and video provided in `cosmos1/models/tokenizer/test_data/`.
- Autoencoding images. Accepts an input image, and outputs a reconstruction of the image obtained by decoding the encoded latents.
```bash
# Autoencoding images using `Cosmos-CI` with a compression rate of 8x8.
model_name="Cosmos-0.1-Tokenizer-CI8x8"
python3 -m cosmos1.models.tokenizer.inference.image_cli \
--image_pattern 'cosmos1/models/tokenizer/test_data/image.png' \
--checkpoint_enc checkpoints/${model_name}/encoder.jit \
--checkpoint_dec checkpoints/${model_name}/decoder.jit
```
If `--output_dir` is not specified, you can find the reconstructed image at `cosmos1/models/tokenizer/test_data/reconstructions/image.png`.
- Autoencoding videos. Accepts an input video, and outputs a reconstruction of the video obtained by decoding the encoded latents.
```bash
# Autoencoding videos using `Cosmos-DV` with a compression rate of 4x8x8.
model_name="Cosmos-0.1-Tokenizer-DV4x8x8"
python3 -m cosmos1.models.tokenizer.inference.video_cli \
--video_pattern 'cosmos1/models/tokenizer/test_data/video.mp4' \
--checkpoint_enc checkpoints/${model_name}/encoder.jit \
--checkpoint_dec checkpoints/${model_name}/decoder.jit
```
If `--output_dir` is not specified, then you can find the reconstructed video at `cosmos1/models/tokenizer/test_data/reconstructions/video.mp4`.
## PyTorch Inference APIs
To run the tokenizers in native PyTorch, append your commands with `--mode=torch`. <br />
In PyTorch mode, the model is constructed from the native network definition scripts, which requires providing additional arguments to configure the model for instantiation.
For example, to instantiate a `Cosmos-DI` with a spatial compression factor of 8, append the following command line arguments:
- `--mode=torch`
- `--tokenizer_type=DI`
- `--spatial_compression=8`
Note that the `--checkpoint_enc`, `--checkpoint_dec`, and `--checkpoint` should still refer to JIT files. <br />
The necessary `state_dict`s will be extracted from the loaded JIT models to initialize the weights of the constructed native PyTorch model.
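Conceptually, the weight transfer works like the sketch below; `NativeEncoder` is only a stand-in for the real network built from `--tokenizer_type` and `--spatial_compression`, so the sketch illustrates the mechanism rather than the repository's actual classes.
```python
import torch
import torch.nn as nn

# Stand-in for the natively defined network that the CLI constructs from
# --tokenizer_type / --spatial_compression; not a real class in this repo.
class NativeEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

model_name = "Cosmos-0.1-Tokenizer-DI8x8"
jit_encoder = torch.jit.load(f"checkpoints/{model_name}/encoder.jit", map_location="cpu")

native_encoder = NativeEncoder()
# Weights are transferred by parameter name; strict=False only because this
# placeholder module does not match the real architecture.
native_encoder.load_state_dict(jit_encoder.state_dict(), strict=False)
```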
```bash
# Autoencoding images using `Cosmos-DI` with a compression rate of 8x8.
model_name="Cosmos-0.1-Tokenizer-DI8x8"
python3 -m cosmos1.models.tokenizer.inference.image_cli \
--image_pattern 'cosmos1/models/tokenizer/test_data/*.png' \
--mode=torch \
--tokenizer_type=DI \
--spatial_compression=8 \
--checkpoint_enc checkpoints/${model_name}/encoder.jit \
--checkpoint_dec checkpoints/${model_name}/decoder.jit
```
To instantiate a `Cosmos-CV` with a temporal factor of 8 and a spatial compression factor of 8, append the following command line arguments:
- `--mode=torch`
- `--tokenizer_type=CV`
- `--temporal_compression=8`
- `--spatial_compression=8`
```bash
# Autoencoding videos using `Cosmos-CV` with a compression rate of 8x8x8.
model_name="Cosmos-1.0-Tokenizer-CV8x8x8"
python3 -m cosmos1.models.tokenizer.inference.video_cli \
--video_pattern 'cosmos1/models/tokenizer/test_data/*.mp4' \
--mode=torch \
--tokenizer_type=CV \
--temporal_compression=8 \
--spatial_compression=8 \
--checkpoint_enc checkpoints/${model_name}/encoder.jit \
--checkpoint_dec checkpoints/${model_name}/decoder.jit
```
## Inference & dataset tokenization with NeMo (JIT/TensorRT)
TensorRT inference is coming soon and will be available in the [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers).
### JIT inference
Please install NeMo from the GitHub `main` branch following the instructions [here](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#pip-from-a-source-branch).
Run the following code to tokenize the video:
```python
import torch
from nemo.collections.common.video_tokenizers.cosmos_vision_tokenizer import CausalVideoTokenizer
model_name = "Cosmos-0.1-Tokenizer-CV4x8x8"
model = CausalVideoTokenizer.from_pretrained(model_name)
input_tensor = torch.randn(1, 3, 9, 512, 512).to('cuda').to(torch.bfloat16)
(latent, ) = model.encode(input_tensor)
```
### Dataset tokenization and multimodal model training
Please see the [Cosmos Tokenizer README within the NeMo repository](https://github.com/NVIDIA/NeMo/tree/main/nemo/collections/common/video_tokenizers) for additional examples to create multimodal training datasets with the Cosmos Tokenizer.
## Evaluation
Quantitative comparison of our tokenizer and previous tokenizers on the DAVIS (Perazzi et al., 2016) dataset. Cosmos Tokenizer achieves state-of-the-art results. Even at higher compression rates (8x8x8 and 8x16x16), Cosmos Tokenizer outperforms previous methods, demonstrating an excellent compression-quality trade-off.

## Performance
Comparison of parameter counts and average encoding and decoding times per image or per video frame on a single A100 80GB GPU. Cosmos Tokenizer achieves 2x to 12x faster speeds than previous methods while maintaining the smallest model sizes, demonstrating high tokenization efficiency.

## [TokenBench](https://github.com/NVlabs/TokenBench)
TokenBench is a comprehensive benchmark that we have curated to standardize the evaluation of [Cosmos-Tokenizer](https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer). It covers a wide variety of domains including robotic manipulation, driving, egocentric, and web videos. It consists of high-resolution, long-duration videos, and is designed to benchmark video tokenizers. We have made TokenBench publicly available at [github.com/NVlabs/TokenBench](https://github.com/NVlabs/TokenBench).
## Core Contributors
Fitsum Reda, Jinwei Gu, Xian Liu, Songwei Ge, Ting-Chun Wang, Haoxiang Wang, Ming-Yu Liu
## Citation
If you find Cosmos Tokenizer useful in your works, please acknowledge it
appropriately by citing:
```
@article{agarwal2025cosmos,
title={Cosmos World Foundation Model Platform for Physical AI},
  author={NVIDIA et al.},
journal={arXiv preprint arXiv:2501.03575},
year={2025}
}
```
## Acknowledgments
We would like to acknowledge the following projects, from which parts of the code in the [cosmos1/models/tokenizer/modules](cosmos1/models/tokenizer/modules) folder are derived:
- [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
- [lucidrains/magvit2-pytorch](https://github.com/lucidrains/magvit2-pytorch)
- [lucidrains/vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch)
- [CompVis/taming-transformers](https://github.com/CompVis/taming-transformers) | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/tokenizer/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 14604
} |
# Cosmos Tokenizer: NeMo Framework Finetuning User Guide
Post-train the Cosmos Tokenizer using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) to more accurately model previously unseen scenarios in your customer data, particularly for self-driving applications. By adapting the Cosmos Tokenizer to the specific characteristics and complexities of your in-house video content, you equip it to handle unique visual and temporal patterns that may have been missed during its initial pre-training. This enhanced modeling capability is essential for downstream diffusion models, which rely on the Tokenizer’s output to generate realistic physical scenes—ultimately boosting the performance and safety of your self-driving car systems.
## Model Support Matrix
The NeMo Framework currently supports the following Cosmos Tokenizer models. Review the available models for post-training.
| Model Name | Model Ckpt |
|-------------------------|----------------------------|
| Cosmos-1.0-Tokenizer-CV8x8x8 | [HF Download](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-CV8x8x8) |
| Cosmos-1.0-Tokenizer-DV8x16x16 | [HF Download](https://huggingface.co/nvidia/Cosmos-1.0-Tokenizer-DV8x16x16) |
For optimal performance, we recommend utilizing GPUs such as the H100-80GB or A100-80GB.
Note: Have a use case that would benefit from an alternative tokenizer? We'd love to hear from you. You can submit a request via a GitHub issue.
## Post-Training Support Matrix
Cosmos Tokenizer can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks:
| Post-training Task | Support Status |
|-------------------------|--------------------|
| General post-training and validation | **Supported** |
## Prerequisites
### 1. Review General Requirements
- System Configuration
- **NVIDIA GPU and driver**: Ensure you have access to the 80G H100 or A100 to run the model(s).
- **Containerization Platform**: We recommend using NVIDIA [NeMo Docker](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo/tags) Runtime (alternatively, you may use NVIDIA enroot).
- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.
- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.
### 2. Clone the Cosmos Repository
```bash
git clone [email protected]:NVIDIA/Cosmos.git
```
### 3. Start the Container
The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Tokenizer models.
Run the following command to download and start the container:
```bash
docker run --ipc=host -it --gpus=all \
-v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \
nvcr.io/nvidia/nemo:24.12.01 bash
```
### 4. Download Checkpoints
Follow the links provided in the Model Support Matrix to download the Cosmos Tokenizer checkpoints from Hugging Face. Detailed instructions for the download process are available on the Hugging Face page.
## Post-train
Post-training a Cosmos Tokenizer enables you to train the model to compress videos that are more specific to your Physical AI use case.
There are 3 steps to post-training: preparing a dataset, preprocessing the data, and post-training the model.
### 1. Prepare a Dataset
The first step is to prepare your dataset. Organize your data into a folder containing multiple video tars, each containing MP4 format videos (preferably at least 720p resolution). The recommended folder structure is as follows:
- `000000.tar`
- `1.mp4`
- `2.mp4`
- `000001.tar`
- `3.mp4`
- `4.mp4`
Here, 000000.tar and 000001.tar represent separate shards, and you may include additional shards as needed.
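If your clips are not already sharded, a short script along these lines can pack them into the layout above (a minimal sketch; the shard size and the `raw_videos` source directory are arbitrary placeholders):
```python
import tarfile
from pathlib import Path

videos = sorted(Path("raw_videos").glob("*.mp4"))
videos_per_shard = 2  # arbitrary; pick a size that suits your dataset

for shard_idx in range(0, len(videos), videos_per_shard):
    shard_name = f"{shard_idx // videos_per_shard:06d}.tar"
    with tarfile.open(shard_name, "w") as tar:
        for video in videos[shard_idx:shard_idx + videos_per_shard]:
            # arcname keeps only the file name inside the shard, matching the layout above
            tar.add(video, arcname=video.name)
```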
### 2. Preprocess the Data
Next, we need to index the webdataset with [energon](<https://github.com/NVIDIA/Megatron-Energon>). Navigate to the dataset directory and run the following command:
```bash
energon prepare . --num-workers 8 --shuffle-tars
```
Interactively select dataset type `ImageWebdataset` and specify the type `mp4`. Below is an example of the interactive setup:
```
Found 2925 tar files in total. The first and last ones are:
- 000000.tar
- 002924.tar
If you want to exclude some of them, cancel with ctrl+c and specify an exclude filter in the command line.
Please enter a desired train/val/test split like "0.5, 0.2, 0.3" or "8,1,1": 99,1,0
Indexing shards [####################################] 2925/2925
Sample 0, keys:
- mp4
Sample 1, keys:
- mp4
Found the following part types in the dataset: mp4
Do you want to create a dataset.yaml interactively? [Y/n]:
The following dataset classes are available:
0. CaptioningWebdataset
1. CrudeWebdataset
2. ImageClassificationWebdataset
3. ImageWebdataset
4. InterleavedWebdataset
5. MultiChoiceVQAWebdataset
6. OCRWebdataset
7. SimilarityInterleavedWebdataset
8. TextWebdataset
9. VQAOCRWebdataset
10. VQAWebdataset
11. VidQAWebdataset
Please enter a number to choose a class: 3
The dataset you selected uses the following sample type:
@dataclass
class ImageSample(Sample):
"""Sample type for an image, e.g. for image reconstruction."""
#: The input image tensor in the shape (C, H, W)
image: torch.Tensor
Do you want to set a simple field_map[Y] (or write your own sample_loader [n])? [Y/n]:
For each field, please specify the corresponding name in the WebDataset.
Available types in WebDataset: mp4
Leave empty for skipping optional field
You may also access json fields e.g. by setting the field to: json[field][field]
You may also specify alternative fields e.g. by setting to: jpg,png
Please enter the field_map for ImageWebdataset:
Please enter a webdataset field name for 'image' (<class 'torch.Tensor'>):
That type doesn't exist in the WebDataset. Please try again.
Please enter a webdataset field name for 'image' (<class 'torch.Tensor'>): mp4
Done
```
### 3. Post-train the Model
The third step is to post-train the Cosmos tokenizer using the NeMo Framework.
#### Run the Post-training Script
Complete the following steps to post-train the Cosmos tokenizer Cosmos-1.0-Tokenizer-CV8x8x8.
1. Install the dependencies under cosmos1/models/tokenizer/nemo:
```bash
pip install megatron-energon==4.0.0 pyav
pip install git+https://github.com/NVIDIA/NeMo-Run.git
pip install moviepy==1.0.3 imageio
# switch to NeMo branch supporting tokenizer post-training
cd /opt/NeMo && git fetch origin cosmos_tokenizer && git checkout cosmos_tokenizer
```
2. Run the following command to post-train Cosmos-1.0-Tokenizer-CV8x8x8:
```bash
export CKPT_PTH="<path/to/your/HF/checkpoints/folder>"
export DATA="<path/to/your/data>"
# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-diffusion-nemo-post-training"
export WANDB_RUN_ID="cosmos_diffusion_7b_text2world"
torchrun --nproc-per-node 8 cosmos1/models/tokenizer/nemo/train_tokenizer.py --yes \
data.path=$DATA \
model.jit_ckpt_pth=$CKPT_PTH \
model.model="Cosmos-1.0-Tokenizer-CV8x8x8"
```
##### Configurable Hyperparameters
For a comprehensive list of configurable hyperparameters, please refer to the `train_tokenizer.py` script. The script supports four major configuration components:
1. **model**: Select a model for post-training and pass the model checkpoint.
2. **data**: Define batch size and dataloader related hyper-parameters.
3. **trainer**: Define the training loop.
4. **optim**: Specify the post-training optimizer hyperparameters.
You can configure any hyperparameter of these four components by setting the value in the launch script using the following format:
```bash
model.jit_ckpt_pth=<your/desired/path>
trainer.max_epochs=<your/desired/epochs>
```
Adjust the values as needed to suit your training requirements. After a few hundred iterations, you should observe that the 'loss' reported in Weights & Biases (`wandb`) starts decreasing.
<p align="center">
<img src="./assets/loss.png" alt="Image description" width="50%">
</p> | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/tokenizer/nemo/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/tokenizer/nemo/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 8312
} |
# Cosmos Autoregressive-based World Foundation Models: NeMo Framework User Guide
Learn how to [run inference](#run-inference) with Cosmos Autoregressive-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.
## Model Support Matrix
The NeMo Framework supports the following Cosmos Autoregressive (AR) models. Review the available models and their compute requirements for post-training and inference to determine the best model for your use case.
| Model Name | Model Status | Compute Requirements for Inference | Multi-GPU Support |
|----------------------------------------------|------------------|------------------------------------------|---------|
| Cosmos-1.0-Autoregressive-4B | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |
| Cosmos-1.0-Autoregressive-12B | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |
| Cosmos-1.0-Autoregressive-5B-Video2World | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |
| Cosmos-1.0-Autoregressive-13B-Video2World | **Supported** | 1 NVIDIA GPU* | **Coming Soon** |
**\*** `H100-80GB` or `A100-80GB` GPUs are recommended.
## Post-Training Inference Support Matrix
Cosmos Autoregressive-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks:
| Post-training Task | Inference Support Status |
|-------------------------|--------------------|
| General post-training | **Supported** |
| Instruction control | **Supported** |
| Action control | **Coming Soon** |
| Camera control | **Coming Soon** |
| Multi-view generation | **Coming Soon** |
| Multi-view generation with vehicle trajectory control | **Coming Soon** |
## Prerequisites
### 1. Review General Requirements
- System Configuration
- **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.
- **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).
- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.
- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.
### 2. Clone the Cosmos Repository
```bash
git clone [email protected]:NVIDIA/Cosmos.git
```
### 3. Start the Container
The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos AR models.
Run the following command to download and start the container:
```bash
docker run --ipc=host -it --gpus=all \
-v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \
nvcr.io/nvidia/nemo:25.02.rc1 bash
```
### 4. Download Checkpoints
To help you get started, we've provided a [download script](../download_autoregressive_nemo.py) to get the Cosmos Autoregressive checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.
1. Set the following environment variables:
```bash
# You must set HF_HOME before running this script.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
```
2. Run the following command to download the models:
```bash
cd /workspace/Cosmos
python cosmos1/models/autoregressive/nemo/download_autoregressive_nemo.py
```
## Run Inference
Running inference with Cosmos AR models lets you predict video frames and generate a new video that continues the scene from a given input video.
In this guide, we'll use this [example inference script](./general.py) to tokenize the input video into a sequence of tokens, which serve as prompts for the model. The model then generates new tokens representing the next set of frames. Finally, the new tokens are decoded back into video format. Only the last 9 frames of the input video are used to generate the next 24 frames.
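For reference, the 9-frame conditioning context corresponds to simply taking the trailing frames of the input clip before tokenization; below is a minimal sketch using `imageio` (installed as part of the setup commands further down) to read the sample clip and keep the last 9 frames.
```python
import imageio
import numpy as np

# Read the sample clip and keep the trailing 9 frames -- the context the
# inference script conditions on before generating the next 24 frames.
reader = imageio.get_reader("cosmos1/models/autoregressive/assets/v1p0/input.mp4")
frames = np.stack([frame for frame in reader])  # (T, H, W, C), uint8
context = frames[-9:]
print(context.shape)
```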
### Run the Inference Script with Base Models
#### 4B and 12B Models
Complete the following steps to run inference on the 4B model.
1. Set the following environment variables:
```bash
# Install required packages
pip install --no-cache-dir imageio[ffmpeg] pyav iopath better_profanity peft git+https://github.com/NVlabs/Pytorch_Retinaface.git@b843f45
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Path to the mp4 file (in git-lfs)
export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4
```
2. Run the following command:
```bash
cd /workspace/Cosmos
git lfs pull $INPUT_DATA
NVTE_FLASH_ATTN=1 \
NVTE_FUSED_ATTN=0 \
NVTE_UNFUSED_ATTN=0 \
torchrun --nproc-per-node 1 cosmos1/models/autoregressive/nemo/inference/general.py \
--input_image_or_video_path $INPUT_DATA \
--video_save_name "Cosmos-1.0-Autoregressive-4B.mp4" \
--ar_model_dir nvidia/Cosmos-1.0-Autoregressive-4B
```
#### 5B and 13B Models
Complete the following steps to run inference on the 5B model.
1. Set the following environment variables:
```bash
# Install required packages
pip install --no-cache-dir imageio[ffmpeg] pyav iopath better_profanity peft git+https://github.com/NVlabs/Pytorch_Retinaface.git@b843f45
export HF_TOKEN=<YOUR HF TOKEN>
export HF_HOME="<path/to/store/checkpoints>"
# Path to the mp4 file (in git-lfs)
export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4
```
2. Run the following command:
```bash
cd /workspace/Cosmos
git lfs pull $INPUT_DATA
NVTE_FLASH_ATTN=1 \
NVTE_FUSED_ATTN=0 \
NVTE_UNFUSED_ATTN=0 \
python3 cosmos1/models/autoregressive/nemo/inference/video2world.py \
--input_type video \
--input_image_or_video_path 'cosmos1/models/autoregressive/assets/v1p0/input.mp4' \
--prompt "A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
--disable_diffusion_decoder \
--ar_model_dir nvidia/Cosmos-1.0-Autoregressive-5B-Video2World
```
### Run the Inference Script with Post-trained Models
You must [create a post-trained model](../post_training/README.md) before completing this section.
#### 4B and 12B Models
Complete the following steps to generate a new output video using a post-trained Base model.
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Inference with post-trained model.
# NOTE: Don't use the checkpoint with -last suffix.
export NEMO_CHECKPOINT=./logs/default/checkpoints/epoch\=0-step\=9
# Path to the mp4 file (in git-lfs)
export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4
```
2. Run the following command:
```bash
cd /workspace/Cosmos
git lfs pull $INPUT_DATA
# change --ar_model_dir to a post-trained checkpoint under ./logs/default/checkpoints/
NVTE_FLASH_ATTN=1 \
NVTE_FUSED_ATTN=0 \
NVTE_UNFUSED_ATTN=0 \
torchrun --nproc-per-node 1 cosmos1/models/autoregressive/nemo/inference/general.py \
--input_image_or_video_path $INPUT_DATA \
--video_save_name "Cosmos-1.0-Autoregressive-4B.mp4" \
--ar_model_dir "$NEMO_CHECKPOINT"
```
#### 5B and 13B Models
Complete the following steps to generate a new output video using a post-trained Video2World model.
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Inference with post-trained model.
# NOTE: Don't use the checkpoint with -last suffix.
export NEMO_CHECKPOINT=./logs/default/checkpoints/epoch\=2-step\=9-last
# Path to the mp4 file (in git-lfs)
export INPUT_DATA=cosmos1/models/autoregressive/assets/v1p0/input.mp4
```
2. Run the following command:
```bash
cd /workspace/Cosmos
git lfs pull $INPUT_DATA
# change --ar_model_dir to a post-trained checkpoint under ./logs/default/checkpoints/
NVTE_FLASH_ATTN=1 \
NVTE_FUSED_ATTN=0 \
NVTE_UNFUSED_ATTN=0 \
python3 cosmos1/models/autoregressive/nemo/inference/video2world.py \
--input_image_or_video_path $INPUT_DATA \
--video_save_name "Cosmos-1.0-Autoregressive-5B-Video2World.mp4" \
--prompt "A video recorded from a moving vehicle's perspective, capturing roads, buildings, landscapes, and changing weather and lighting conditions." \
--ar_model_dir "$NEMO_CHECKPOINT"
```
#### Example Output
The following output is an example video generated from the post-trained model using [`general.py`](./general.py):
<video src="https://github.com/user-attachments/assets/e744a5a4-2ce0-4de3-9497-7152b25c9022">
Your browser doesn't support the video tag.
</video>
Generated videos are saved at the location configured in the `--video_save_name` parameter.
The input video used to generate this video can be found in `cosmos1/models/autoregressive/assets/v1p0/input.mp4`.
> **Disclaimer**:
> The post-training example in this documentation is a demonstration of general post-training and not a guaranteed recipe for success. Post-training outcomes depend heavily on the quality and diversity of the dataset. To achieve good results, ensure your dataset is clean, well-structured, diverse, and properly labeled. Poorly prepared data can lead to issues like overfitting, bias, or poor performance. Carefully curate your dataset to reflect the desired use case for reliable results.
### Configuration Options
The following table details the parameters that can be modified for accelerated inference with NeMo. You can adjust these parameters to optimize performance based on your specific requirements.
| Parameter | Description | Default |
|--------------------------------|---------------------------------------------------------------------------------|---------|
| `--input_type` | The input type (image or video) | `video` |
| `--input_image_or_video_path` | Path to the input video to run inference | `cosmos1/models/autoregressive/assets/v1p0/input.mp4` |
| `--video_save_name` | Path to generated video | `./nemo_generated_video.mp4` |
| `--ar_model_dir` | Model name or path to model `nvidia/Cosmos-1.0-Autoregressive-4B` or `nvidia/Cosmos-1.0-Autoregressive-12B` | `nvidia/Cosmos-1.0-Autoregressive-4B` |
| `--encoder_path` | Path to encoder | `nvidia/Cosmos-1.0-Tokenizer-DV8x16x16` |
| `--decoder_path` | Path to the decoder | `nvidia/Cosmos-1.0-Tokenizer-DV8x16x16` |
| `--guardrail_dir` | Path to guardrails | `nvidia/Cosmos-1.0-Guardrail` |
| `--top_p` | Top-p inference parameter | `0.9` |
| `--temperature` | Sampling temperature | `1` |
| `--disable_diffusion_decoder` | Disables running the diffusion decoder on the generated result | `False` | | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/autoregressive/nemo/inference/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/inference/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 11866
} |
# Cosmos Autoregressive-based World Foundation Models: NeMo Framework User Guide
Learn how to [post-train](#post-train) Cosmos Autoregressive-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.
## Model Support Matrix
The NeMo Framework supports the following Cosmos Autoregressive (AR) models. Review the available models and their compute requirements for post-training and inference to determine the best model for your use case.
| Model Name | Model Status | Compute Requirements for Post-Training |
|-------------------------|----------------------------|-------------------------------------------|
| Cosmos-1.0-Autoregressive-4B | **Supported** | 2 NVIDIA GPUs* |
| Cosmos-1.0-Autoregressive-12B | **Supported** | 8 NVIDIA GPUs* |
| Cosmos-1.0-Autoregressive-5B-Video2World | **Supported** | 2 NVIDIA GPUs* |
| Cosmos-1.0-Autoregressive-13B-Video2World | **Supported** | 8 NVIDIA GPUs* |
**\*** `H100-80GB` or `A100-80GB` GPUs are recommended.
## Post-Training Support Matrix
Cosmos Autoregressive-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks:
| Post-training Task | Support Status |
|-------------------------|--------------------|
| General post-training | **Supported** |
| Instruction control | **Coming Soon** |
| Action control | **Coming Soon** |
| Camera control | **Coming Soon** |
| Multi-view generation | **Coming Soon** |
| Multi-view generation with vehicle trajectory control | **Coming Soon** |
## Prerequisites
### 1. Review General Requirements
- System Configuration
- **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.
- **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).
- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.
- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.
### 2. Clone the Cosmos Repository
```bash
git clone [email protected]:NVIDIA/Cosmos.git
```
### 3. Start the Container
The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos AR models.
Run the following command to download and start the container:
```bash
docker run --ipc=host -it --gpus=all \
-v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \
nvcr.io/nvidia/nemo:25.02.rc1 bash
```
### 4. Download Checkpoints
To help you get started, we've provided a [download script](../download_autoregressive_nemo.py) to get the Cosmos Autoregressive checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.
1. Set the following environment variables:
```bash
# You must set HF_HOME before running this script.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
```
2. Run the following command to download the models:
```bash
cd /workspace/Cosmos
python cosmos1/models/autoregressive/nemo/download_autoregressive_nemo.py
```
## Post-train
Post-training a Cosmos Autoregressive-based WFM enables you to train the model to generate videos using frame predictions that are more specific to your Physical AI use case.
For example, if you want to generate action sequences for a specific robot, you can post-train the model to generate videos that are more aligned with typical actions/outcomes for that robot.
There are 3 steps to post-training: preparing a dataset, preprocessing the data, and post-training the model.
### 1. Prepare a Dataset
The first step is to prepare a dataset. Post-training a Cosmos-1.0-Autoregressive model enables you to get better video-frame predictions for your specific use case.
You must provide a folder containing a collection of videos in **MP4 format**, preferably 720p. In this guide, we'll use the sample videos located in the `cosmos1/models/autoregressive/assets/v1p0/batch_inputs` directory.
### 2. Preprocess Data
#### 4B and 12B Models
The second step is to preprocess the data to create an [indexed dataset](https://github.com/NVIDIA/Megatron-LM/tree/main/megatron/core/datasets).
The `IndexedDataset` class is the lowest-level data interface in Megatron Core and creates a `.bin` and `.idx` file.
Before proceeding, ensure all videos are in **RGB format**. Complete the following steps to preprocess the data.
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Path to Raw mp4 videos.
export RAW_DATA="cosmos1/models/autoregressive/assets/v1p0/batch_inputs"
# Path to Processed Dataset.
export OUTPUT_PREFIX="./indexed_videos"
```
2. Run the following command to preprocess the data:
```bash
cd /workspace/Cosmos
git lfs pull --include=$RAW_DATA
python cosmos1/models/autoregressive/nemo/post_training/prepare_dataset.py \
--input_videos_dir $RAW_DATA \
--output_prefix $OUTPUT_PREFIX
```
Executing the [data preprocessing script](./prepare_dataset.py) for the base model generates the following files for each video:
- **`[i].idx` File**: This file contains metadata at the dataset level:
- **Index Header**: Ensures backward compatibility.
- **Index Version**: Maintains backward compatibility.
- **Data Type Code**: Numeric code indicating the data type used in the data file.
- **Sequence Count**: Total number of sequences in the dataset.
- **Document Count**: Total number of documents in the dataset.
- **`[i].bin` File**: This file includes metadata at the document and sequence levels:
- **Elements per Sequence**: Number of elements in each sequence.
- **Byte Offset per Sequence**: Pointer indicating the start of each sequence.
- **Sequence Index Range**: Consecutive index range `[...)` for each document.
#### 5B and 13B Models
The second step is to preprocess the data to precompute the text and video embeddings for fine-tuning.
Before proceeding, ensure all videos are in **RGB format**. Complete the following steps to preprocess the data.
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Path to Raw mp4 videos.
export RAW_DATA="cosmos1/models/autoregressive/assets/v1p0/batch_inputs"
# Path to Processed Dataset.
export OUTPUT_PREFIX="./indexed_videos"
```
2. Run the following command to preprocess the data:
```bash
cd /workspace/Cosmos
git lfs pull --include=$RAW_DATA
python3 cosmos1/models/autoregressive/nemo/post_training/video2world_prepare_dataset.py \
--input_jsonl $RAW_DATA/video2world.jsonl \
--output_dir $OUTPUT_PREFIX
```
Executing the [data preprocessing script](./video2world_prepare_dataset.py) for the Video2World models generates the following files for each video:
- **`[i].pt` File**: This file contains video tokens or prompt embeddings:
- It has a format `<train/test/val>_<prompt/video>_<idx>.pt`
- **`[i]metadata.json` File**: This file includes metadata:
- It tells you the number of train, test, and validation samples
### 3. Post-train the Model
The third step is to post-train the model. This step uses NeMo Framework's data and model parallelism capabilities to train the model on the post-training samples. This is accomplished by utilizing Tensor Parallelism.
- **Tensor Parallelism**: Spreads the parameter tensor of individual layers across GPUs.
#### Run the Post-training Script
##### 4B and 12B Models
Complete the following steps to post-train the Cosmos-1.0-Autoregressive-4B model.
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Number of GPU devices available for post-training. At least 2 for 4B and 8 for 12B.
export NUM_DEVICES=2
# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-autoregressive-nemo-finetuning"
export WANDB_RUN_ID="cosmos_autoregressive_4b_finetune"
```
2. Run the following command for Cosmos-1.0-Autoregressive-4B post-training:
```bash
torchrun --nproc-per-node $NUM_DEVICES cosmos1/models/autoregressive/nemo/post_training/general.py \
--data_path $OUTPUT_PREFIX \
--split_string 4,1,1 \
--log_dir ./logs \
--max_steps 10 --save_every_n_steps 5 \
--tensor_model_parallel_size $NUM_DEVICES \
--model_path nvidia/Cosmos-1.0-Autoregressive-4B
```
3. You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).
##### 5B and 13B Models
Complete the following steps to post-train the Cosmos-1.0-Autoregressive-5B model.
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Number of GPU devices available for post-training. At least 4 for 5B and 8 for 13B.
export NUM_DEVICES=4
# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-autoregressive-nemo-finetuning"
export WANDB_RUN_ID="cosmos_autoregressive_5b_finetune"
```
2. Run the following command for Cosmos-1.0-Autoregressive-5B-Video2World post-training:
```bash
torchrun --nproc-per-node $NUM_DEVICES \
cosmos1/models/autoregressive/nemo/post_training/video2world_finetuning.py \
--data_path $OUTPUT_PREFIX \
--log_dir ./logs \
--max_steps 10 --save_every_n_steps 5 \
--tensor_model_parallel_size $NUM_DEVICES \
--model_path nvidia/Cosmos-1.0-Autoregressive-5B-Video2World
```
3. You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).
#### Configuration Options
Before getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements.
| Parameter | Description | Default |
|---|---|---|
| `--data_path` | Specifies the location of your preprocessed dataset. Ensure this path points to the directory containing your `.bin` and `.idx` files. | `/path/to/data` |
| `--model_path` | Specifies the directory to the cosmos model to run post-training on. | `nvidia/Cosmos-1.0-Autoregressive-4B` |
| `--index_mapping_dir` | Specifies the directory to store the indexed dataset. | `./index_mapping` |
| `--log_dir` | Specifies the directory to store the logs and checkpoints. | `./log_dir` |
| `--split_string` | Specifies the data split ratios for training, validation, and testing. (Only valid for Base Model (4B and 12B)) | `4,1,1` |
| `--tensor_model_parallel_size` | Controls the number of GPUs used for model parallelism. Increase this number to scale up, ensuring your hardware can support the additional load. | `2` |
| `--max_steps` | Defines the total number of training steps. Adjust based on training duration and storage capacity. | `100` |
| `--save_every_n_steps` | Defines how often checkpoints are saved. Adjust based on training duration and storage capacity. | `10` |
| `--global_batch_size` | Global batch size. Tune to balance memory usage and training speed; larger batch sizes may improve convergence but require more memory. | `2` |
| `--micro_batch_size` | Micro batch size. Tune to balance memory usage and training speed; larger batch sizes may improve convergence but require more memory. | `1` |
| `--lr` | Sets the learning rate. A common starting point is `5e-5`, but this can be adjusted based on model performance and convergence behavior. | `5e-5` |
| `--max_epochs` | The maximum number of epochs to run during post-training | `10` | | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/autoregressive/nemo/post_training/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/autoregressive/nemo/post_training/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 12643
} |
# Cosmos Diffusion-based World Foundation Models: NeMo Framework User Guide
Learn how to [run inference](#inference) with Cosmos Diffusion-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.
## Model Support Matrix
The NeMo Framework supports the following Cosmos Diffusion models. Review the available models and their compute requirements for post-training and inference to determine the best model for your use case.
| Model Name | Model Status | Compute Requirements for Inference | Multi-GPU Support |
|----------------------------------------------|------------------|------------------------------------------|---------|
| Cosmos-1.0-Diffusion-7B-Text2World | **Supported** | 1 NVIDIA GPU* | **Supported** |
| Cosmos-1.0-Diffusion-14B-Text2World | **Supported** | 1 NVIDIA GPU* | **Supported** |
| Cosmos-1.0-Diffusion-7B-Video2World | **Supported** | 1 NVIDIA GPU* | **Supported** |
| Cosmos-1.0-Diffusion-14B-Video2World | **Supported** | 1 NVIDIA GPU* | **Supported** |
**\*** `H100-80GB` or `A100-80GB` GPUs are recommended.
## Post-Trained Model Inference Support Matrix
Cosmos Diffusion-based WFMs can also be post-trained for a variety of Physical AI tasks and used for inference. Review the following table for a list of available Physical AI post-training tasks:
| Post-training Task | Inference Support Status |
|-------------------------|--------------------|
| General post-training | **Supported** |
| Instruction control | **Coming Soon** |
| Action control | **Coming Soon** |
| Camera control | **Coming Soon** |
| Multi-view generation | **Coming Soon** |
| Multi-view generation with vehicle trajectory control | **Coming Soon** |
## Prerequisites
### 1. Review General Requirements
- System Configuration
- **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.
- **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).
- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.
- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.
### 2. Clone the Cosmos Repository
```bash
git clone [email protected]:NVIDIA/Cosmos.git
```
### 3. Start the Container
The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Diffusion models.
Run the following command to download and start the container:
```bash
docker run --ipc=host -it --gpus=all \
-v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \
nvcr.io/nvidia/nemo:cosmos.1.0.1 bash
```
### 4. Download Checkpoints
To help you get started, we've provided a [download script](../download_diffusion_nemo.py) to get the Cosmos Diffusion Text2World and Video2World checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.
1. Set the following environment variables:
```bash
# You must set HF_HOME before running this script.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
```
2. Run the following command to download the models:
```bash
cd /workspace/Cosmos
python cosmos1/models/diffusion/nemo/download_diffusion_nemo.py
```
## Run Inference
Running inference with Cosmos Diffusion Text2World models lets you generate a video conditioned on a text prompt. With the Video2World models, you can generate a video conditioned on a text prompt as well as on an image or video. Note that when supplying an image or video for conditioning the following requirements must be met:
- **Video**: The video must be less than 9 frames long
- **Image**: The image must be either PNG or JPEG format and have one of the following extensions: `.png`, `.jpg`, or `.jpeg`
Our inference script enables accelerated world generation with context parallelism. We use context parallelism to split the diffusion process across multiple GPUs, providing near-linear scaling efficiency. Our diffusion pipeline also allows the user to set a variety of hyperparameters, including the random seed, classifier-free guidance scale, negative prompt, video resolution, and video fps.
General post-training is essentially a continuation of pre-training. To perform inference with models that have been post-trained with general post-training, you can set the `subject_name` parameter to the subject the model was post-trained on. The `prompt` and `conditioned_image_or_video_path` parameters are then used to provide the setting and describe the events in the generated world. The final prompt will be "A video of sks `{subject_name}`. `{prompt}`". We can also use [inference/general.py](./general.py) or [inference/video2world.py](./video2world.py) to perform inference on the base models since the model architectures are the same as the general post-trained models.
We also provide the option to upsample the `prompt` and make it more detailed. This can improve the quality of the generated world. Note that for Video2World generation, currently the LLM only looks at your text prompt to upsample the initial prompt, and it does not consider your input image/video for prompt upsampling. We will add text + image processing for prompt upsampling in the near future.
### Run the Inference Script with Base Models
#### Text2World
Complete the following steps to generate a new output video of a robot cooking.
1. Set the following environment variables:
```bash
# HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference.
export NUM_DEVICES=1
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))
# Prompt describing world scene and actions taken by subject (if provided).
export PROMPT="The teal robot is cooking food in a kitchen. Steam rises from a simmering pot as the robot chops vegetables on a worn wooden cutting board. Copper pans hang from an overhead rack, catching glints of afternoon light, while a well-loved cast iron skillet sits on the stovetop next to scattered measuring spoons and a half-empty bottle of olive oil."
```
2. Run the following command:
```bash
NVTE_FUSED_ATTN=0 \
torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/general.py \
--model Cosmos-1.0-Diffusion-7B-Text2World \
--cp_size $NUM_DEVICES \
--num_devices $NUM_DEVICES \
--video_save_path "Cosmos-1.0-Diffusion-7B-Text2World.mp4" \
--guidance 7 \
--seed 1 \
--prompt "$PROMPT" \
--enable_prompt_upsampler
```
#### Video2World
Complete the following steps to generate a new output video conditioned on an input video and a text prompt using the Video2World models.
1. Set the following environment variables:
```bash
# HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference.
export NUM_DEVICES=1
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))
# Prompt describing world scene and actions taken by subject (if provided).
export PROMPT="<Supply a prompt here>"
export CONDITIONED_IMAGE_OR_VIDEO="<Path to conditioned image or video>"
```
2. Run the following command:
```bash
NVTE_FUSED_ATTN=0 \
torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/video2world.py \
--model Cosmos-1.0-Diffusion-7B-Video2World \
--cp_size $NUM_DEVICES \
--num_devices $NUM_DEVICES \
--video_save_path "Cosmos-1.0-Diffusion-7B-Video2World.mp4" \
--guidance 7 \
--seed 1 \
--prompt "$PROMPT" \
--conditioned_image_or_video_path "$CONDITIONED_IMAGE_OR_VIDEO" \
--num_input_frames 9 \
--enable_prompt_upsampler
```
### Run the Inference Script with Post-trained Models
Create a post-trained model first, by using the instructions [here](../post_training/README.md)
Then complete the following steps to generate a new output video from this model.
#### Text2World
1. Set the following environment variables:
```bash
# HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Inference with post-trained model. Find post-trained model under nemo_experiments. Example path:
export NEMO_CHECKPOINT=nemo_experiments/cosmos_diffusion_7b_text2world_finetune/default/2024-12-17_01-28-03/checkpoints/epoch=39-step=199/weights
# Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference.
export NUM_DEVICES=1
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))
# Prompt describing world scene and actions taken by subject (if provided).
export PROMPT="The teal robot is cooking food in a kitchen. Steam rises from a simmering pot as the robot chops vegetables on a worn wooden cutting board. Copper pans hang from an overhead rack, catching glints of afternoon light, while a well-loved cast iron skillet sits on the stovetop next to scattered measuring spoons and a half-empty bottle of olive oil."
```
2. Run the following command:
```bash
NVTE_FUSED_ATTN=0 \
torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/general.py \
--model Cosmos-1.0-Diffusion-7B-Text2World \
--nemo_checkpoint "$NEMO_CHECKPOINT" \
--cp_size $NUM_DEVICES \
--num_devices $NUM_DEVICES \
--video_save_path "Cosmos-1.0-Diffusion-7B-Text2World.mp4" \
--guidance 7 \
--seed 1 \
--prompt "$PROMPT" \
--subject_name "teal robot" \
--enable_prompt_upsampler
```
##### Example Output
The following output is an example video generated from the post-trained model using [`general.py`](./general.py):
<video src="https://github.com/user-attachments/assets/a2b5bc7e-4e0a-4514-a04e-919281cee6fa">
Your browser does not support the video tag.
</video>
##### Configuration Options
The following table details the parameters that can be modified for accelerated inference with NeMo. You can adjust these parameters to optimize performance based on your specific requirements. The model inference hyperparameters listed below have the same functionality as in [Cosmos Diffusion Common Parameters](cosmos1/models/diffusion/README.md#parameters).
| Parameter | Description | Default |
|--------------------------------|---------------------------------------------------------------------------------|---------|
| `--model` | Name of Cosmos Text2World Diffusion model to use for inference. | `Cosmos-1.0-Diffusion-7B-Text2World` |
| `--prompt` | Prompt which the sampled video is conditioned on. Tries to generate what is mentioned in the prompt. | *None* (user must provide) |
| `--negative_prompt` | Negative prompt for improved quality | "The video captures a series of frames showing ugly scenes..." |
| `--subject_name` | Name of the subject the model was post-trained on. This can be left empty for base model inference. | *None* |
| `--guidance` | A control mechanism that determines how closely the model follows specified conditions (prompt) during the generation process. We recommend starting with a guidance of 7 and increasing it later if necessary. | 7 |
| `--sampler` | Sampling method used for generation. Only supports **RES** sampler from [this paper](https://arxiv.org/pdf/2308.02157). | `RES` |
| `--video_save_path` | Location to save generated videos. | `Cosmos-1.0-Diffusion-7B-Text2World.mp4` |
| `--fps` | Frames-per-second of generated video. Cosmos Diffusion models generate videos at 24 FPS by default. | 24 |
| `--height` | Height of the generated video. Set to 704 pixels by default, which is the largest supported video height for Cosmos Diffusion. | 704 |
| `--width` | Width of the generated video. Set to 1280 pixels by default, which is the largest supported video width for Cosmos Diffusion. | 1280 |
| `--seed` | Random seed for generating initial noise sample. Changing this will create a different video for the same prompt. Keep the seed fixed to maintain deterministic video generations. | 1 |
| `--num_devices` | [1–8] Number of GPUs to use in parallel for inference. | 8 |
| `--cp_size` | [1–8] Number of context parallel ranks to spawn for parallelized inference. Must be equal to `num_devices`. | 8 |
#### Video2World
1. Set the following environment variables:
```bash
# HuggingFace Cache to save T5 text encoder, video tokenizer, prompt upsampler, and guardrails weights.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Inference with post-trained model. Find post-trained model under nemo_experiments. Example path:
export NEMO_CHECKPOINT=nemo_experiments/cosmos_diffusion_7b_video2world_finetune/default/2025-02-03_11-57-33/checkpoints/epoch=39-step=199/weights
# Number of GPU devices available for inference. Supports up to 8 GPUs for accelerated inference.
export NUM_DEVICES=1
export CUDA_VISIBLE_DEVICES=$(seq -s, 0 $((NUM_DEVICES - 1)))
export PROMPT="<Supply a prompt here>"
export CONDITIONED_IMAGE_OR_VIDEO="<Path to conditioned image or video>"
```
2. Run the following command:
```bash
NVTE_FUSED_ATTN=0 \
torchrun --nproc_per_node=$NUM_DEVICES cosmos1/models/diffusion/nemo/inference/video2world.py \
--model Cosmos-1.0-Diffusion-7B-Video2World \
--nemo_checkpoint "$NEMO_CHECKPOINT" \
--cp_size $NUM_DEVICES \
--num_devices $NUM_DEVICES \
--video_save_path "Cosmos-1.0-Diffusion-7B-Video2World.mp4" \
--guidance 7 \
--seed 1 \
--prompt "$PROMPT" \
--conditioned_image_or_video_path "$CONDITIONED_IMAGE_OR_VIDEO" \
--subject_name "teal robot" \
   --enable_prompt_upsampler
```
##### Configuration Options
The following table details the parameters that can be modified for accelerated inference with NeMo. You can adjust these parameters to optimize performance based on your specific requirements. The model inference hyperparameters listed below have the same functionality as in [Cosmos Diffusion Common Parameters](cosmos1/models/diffusion/README.md#parameters).
| Parameter | Description | Default |
|--------------------------------|---------------------------------------------------------------------------------|---------|
| `--model` | Name of Cosmos Video2World Diffusion model to use for inference. | `Cosmos-1.0-Diffusion-7B-Video2World` |
| `--prompt` | Prompt which the sampled video is conditioned on. Tries to generate what is mentioned in the prompt. | *None* (user must provide) |
| `--conditioned_image_or_video_path` | Input video used for conditioning generations. | *None* (user must provide) |
| `--negative_prompt` | Negative prompt for improved quality | "The video captures a series of frames showing ugly scenes..." |
| `--subject_name` | Name of the subject the model was post-trained on. This can be left empty for base model inference. | *None* |
| `--guidance` | A control mechanism that determines how closely the model follows specified conditions (prompt) during the generation process. We recommend starting with a guidance of 7 and increasing it later if necessary. | 7 |
| `--sampler` | Sampling method used for generation. Only supports **RES** sampler from [this paper](https://arxiv.org/pdf/2308.02157). | `RES` |
| `--video_save_path` | Location to save generated videos. | `Cosmos-1.0-Diffusion-7B-Video2World.mp4` |
| `--fps` | Frames-per-second of generated video. Cosmos Diffusion models generate videos at 24 FPS by default. | 24 |
| `--height` | Height of the generated video. Set to 704 pixels by default, which is the largest supported video height for Cosmos Diffusion. | 704 |
| `--width` | Width of the generated video. Set to 1280 pixels by default, which is the largest supported video width for Cosmos Diffusion. | 1280 |
| `--seed` | Random seed for generating initial noise sample. Changing this will create a different video for the same prompt. Keep the seed fixed to maintain deterministic video generations. | 1 |
| `--num_devices` | [1–8] Number of GPUs to use in parallel for inference. | 8 |
| `--cp_size` | [1–8] Number of context parallel ranks to spawn for parallelized inference. Must be equal to `num_devices`. | 8 |
Generated videos are saved at the location configured with the `--video_save_path` parameter.
> **Tip**:
> For faster inference, you can remove the `--enable_prompt_upsampler` parameter, but this may degrade the generated result.
> **Disclaimer**:
> The post-training example in this documentation is a demonstration of general post-training and not a guaranteed recipe for success. Post-training outcomes depend heavily on the quality and diversity of the dataset. To achieve good results, ensure your dataset is clean, well-structured, diverse, and properly labeled. Poorly prepared data can lead to issues like overfitting, bias, or poor performance. Carefully curate your dataset to reflect the desired use case for reliable results. | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/diffusion/nemo/inference/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/inference/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 18975
} |
# Cosmos Diffusion-based World Foundation Models: NeMo Framework User Guide
Learn how to [post-train](#post-train) Cosmos Diffusion-based World Foundation Models (WFMs) using the [NVIDIA NeMo Framework](https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html) for your custom Physical AI tasks by following this guide.
## Model Support Matrix
The NeMo Framework supports the following Cosmos Diffusion models. Review the available models and their compute requirements for post-training to determine the best model for your use case.
| Model Name | Model Status | Compute Requirements for Post-Training |
|----------------------------------------------|------------------|------------------------------------------|
| Cosmos-1.0-Diffusion-7B-Text2World | **Supported** | 8 NVIDIA GPUs* |
| Cosmos-1.0-Diffusion-14B-Text2World | **Supported** | 8 NVIDIA GPUs* |
| Cosmos-1.0-Diffusion-7B-Video2World | **Supported** | 8 NVIDIA GPUs* |
| Cosmos-1.0-Diffusion-14B-Video2World | **Supported** | 8 NVIDIA GPUs* |
**\*** `H100-80GB` or `A100-80GB` GPUs are recommended.
## Post-Training Support Matrix
Cosmos Diffusion-based WFMs can be post-trained for a variety of Physical AI tasks. Review the following table for a list of available Physical AI post-training tasks:
| Post-training Task | Post-Training Support Status |
|-------------------------|--------------------|
| General post-training | **Supported** |
| Instruction control | **Coming Soon** |
| Action control | **Coming Soon** |
| Camera control | **Coming Soon** |
| Multi-view generation | **Coming Soon** |
| Multi-view generation with vehicle trajectory control | **Coming Soon** |
## Prerequisites
### 1. Review General Requirements
- System Configuration
- **NVIDIA GPU and driver**: Ensure you have access to the minimum compute required to run the model(s), as listed in the model support matrix.
- **Containerization Platform**: We recommend using Docker with NVIDIA Container Runtime (alternatively, you may use NVIDIA enroot).
- Get your [Hugging Face User Access Token](https://huggingface.co/docs/hub/en/security-tokens), which is required to obtain the Cosmos models for training and inference.
- Get your [Weights and Biases API Key](https://docs.wandb.ai/support/find_api_key/) for logging and tracking.
### 2. Clone the Cosmos Repository
```bash
git clone [email protected]:NVIDIA/Cosmos.git
```
### 3. Start the Container
The [NeMo Framework container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/nemo) supports post-training and inference for Cosmos Diffusion models.
Run the following command to download and start the container:
```bash
docker run --ipc=host -it --gpus=all \
-v $PATH_TO_COSMOS_REPO:/workspace/Cosmos \
nvcr.io/nvidia/nemo:cosmos.1.0.1 bash
```
### 4. Download Checkpoints
To help you get started, we've provided a [download script](../download_diffusion_nemo.py) to get the Cosmos Diffusion Text2World and Video2World checkpoints from Hugging Face. These checkpoints are in the NeMo distributed checkpoint format required to run post-training and inference with NeMo Framework.
1. Set the following environment variables:
```bash
# You must set HF_HOME before running this script.
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
```
2. Run the following command to download the models:
```bash
cd /workspace/Cosmos
python cosmos1/models/diffusion/nemo/download_diffusion_nemo.py
```
## Post-train
Post-training a Cosmos Diffusion-based WFM enables you to train the model to generate videos that are more specific to your Physical AI use case.
For example, if you want to generate action sequences for a specific robot, you can post-train the model to generate videos that are more aligned with typical actions/outcomes for that robot.
There are three steps to post-training: preparing a dataset, preprocessing the data, and post-training the model. Two alternative preprocessing workflows are described below: single-subject post-training and robot-instruction (custom prompt) post-training.
### 1. Prepare a Dataset
The first step is to prepare a dataset. Post-training a Cosmos-1.0-Diffusion-Text2World/Cosmos-1.0-Diffusion-Video2World model enables you to generate videos of a specific subject in new environments using a collection of input videos of that same subject as reference material.
You must provide a folder containing a collection of videos in **MP4 format**, preferably 720p. These videos should focus on the subject throughout the entire video so that each video chunk contains the subject.
Run the following command to download the sample videos used for post-training:
```bash
huggingface-cli download nvidia/Cosmos-NeMo-Assets --repo-type dataset --local-dir cosmos1/models/diffusion/assets/ --include "*.mp4*"
```
### 2. Preprocess Data for Single Subject Post-training
The second step is to preprocess the input videos. This generates the post-training samples and the metadata required for the post-training process by:
1. Selecting `N` chunks of 121 frames from each video, generating `N` post-training samples per video.
2. Encoding the 121 frames by first independently compressing the first frame and then applying an 8x temporal compression for the rest of the frames.
3. Generating `total_samples = # of videos x # of chunks` post-training samples.
Before proceeding, ensure all videos are in **RGB format**. Complete the following steps to generate the post-training samples and metadata for the robot dataset. Remember to follow the given prompt format by including the subject's name in the prompt. For example, if the subject is "robot," the prompt should read `"A video of sks robot."`.
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Path to Raw mp4 videos.
export RAW_DATA="cosmos1/models/diffusion/assets/nemo_diffusion_example_data"
# Path to Processed Dataset.
export CACHED_DATA="./cached_data" && mkdir -p $CACHED_DATA
```
2. Run the following command to preprocess the data:
```bash
python cosmos1/models/diffusion/nemo/post_training/prepare_dataset.py \
--dataset_path $RAW_DATA \
--output_path $CACHED_DATA \
--prompt "A video of sks teal robot." \
  --num_chunks 500
```
Executing the [data preprocessing script](./prepare_dataset.py) generates the following files for each video (using `[i]` as the index of the video) at the `$CACHED_DATA` path; a short loading sketch follows this list:
- **`[i].info.json`**: Metadata for the video sample.
- **`[i].t5_text_embeddings.pth`**: T5-generated text embedding for the video clip.
- **`[i].t5_text_mask.pth`**: Mask for T5 text embedding, set to all ones by default to use the entire text embedding.
- **`[i].video_latent.pth`**: 3D spatiotemporal video tokens generated from the video tokenizer.
- **`[i].conditioning_latent.pth`**: 3D spatiotemporal video tokens generated from the video tokenizer on the first nine frames of the input video. These conditioning latents are only used during Video2World training.
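As a quick sanity check, the cached tensors can be loaded with `torch.load`; the sketch below is illustrative only (shapes depend on your videos and the tokenizer configuration):
```python
# Illustrative sketch: inspect one cached post-training sample in $CACHED_DATA.
import torch

text_emb = torch.load("cached_data/0.t5_text_embeddings.pth")   # T5 text embedding for the clip
text_mask = torch.load("cached_data/0.t5_text_mask.pth")        # all ones by default
video_latent = torch.load("cached_data/0.video_latent.pth")     # 3D spatiotemporal video tokens

for name, tensor in [("text_emb", text_emb), ("text_mask", text_mask), ("video_latent", video_latent)]:
    print(name, getattr(tensor, "shape", None))
```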
### 3. Preprocess Data for Robot Instruction (or other Custom Prompt) Post-training
Robot instruction post-training uses instructions as input prompts. Instructions are imperative prompts and correspond to the physical actions performed by the robot in a video. The instruction dataset processing workflow generalizes to any custom input prompt per video.
1. Create instruction dataset
Create a dataset folder containing videos and per video instructions in the following format:
```
robot_dataset
videos
id1.mp4
id2.mp4
...
instructions
id1.json
id2.json
```
- **`robot_dataset/videos/id1.mp4`**: video clip
- **`robot_dataset/instructions/id1.json`**: JSON file with the key `language_instruction_0` mapping to a text instruction (see the example file after these steps)
2. Run the following command to preprocess the data:
```bash
python cosmos1/models/diffusion/nemo/post_training/prepare_instruction_dataset.py \
--dataset_path robot_dataset \
--output_path robot_dataset/processed \
--num_chunks 500
```
The output dataset is saved in `robot_dataset/processed/` in the same format described in the previous section.
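For reference, a minimal instruction file matching the format above might look like the following (the instruction text itself is only an illustration):
```json
{
  "language_instruction_0": "Pick up the red block and place it in the bin."
}
```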
### 4. Post-train the Model
The final step is to post-train the model. This step uses NeMo Framework's data and model parallelism capabilities to train the model on the post-training samples. This is accomplished by utilizing Fully Sharded Data Parallel (FSDP) and Tensor Parallelism.
- **FSDP**: Distributes model parameters, optimizer states, and activations across all GPUs
- **Tensor Parallelism**: Spreads the parameter tensor of individual layers across GPUs.
> **NOTE**:
> For the 14B model, we also employ activation checkpointing to facilitate single-node training.
#### Run the Post-training Script
Complete the following steps to post-train the Cosmos-1.0-Diffusion-7B-Text2World or Cosmos-1.0-Diffusion-7B-Video2World models on the robot dataset using 8 GPUs.
##### Text2World
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-diffusion-nemo-post-training"
export WANDB_RUN_ID="cosmos_diffusion_7b_text2world_finetune"
```
2. Run the following command for Cosmos-Diffusion-Text2World-7B general post-training:
```bash
NVTE_FUSED_ATTN=0 \
CUDA_DEVICE_MAX_CONNECTIONS=1 \
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/post_training/general.py \
--yes \
--factory cosmos_diffusion_7b_text2world_finetune \
data.path=$CACHED_DATA \
trainer.max_steps=1000 \
optim.config.lr=1e-6
```
###### Configuration Options
Before getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements.
| Parameter | Description | Default |
|--------------------------------|---------------------------------------------------------------------------------|---------|
| `--factory` | Recipe to use: `cosmos_diffusion_7b_text2world_finetune` or `cosmos_diffusion_14b_text2world_finetune` for general post-training. | cosmos_diffusion_7b_text2world_finetune |
| `data.path` | Path to processed post-training dataset (str). | None |
| `resume.restore_config.path` | Path to pre-trained Cosmos Diffusion NeMo distributed checkpoint (str). | None |
| `optim.config.lr` | Learning rate (float). | 1e-6 |
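For example, to post-train from an explicitly specified pre-trained checkpoint, the `resume.restore_config.path` override can be appended to the same command. The checkpoint path below is a placeholder:
```bash
NVTE_FUSED_ATTN=0 \
CUDA_DEVICE_MAX_CONNECTIONS=1 \
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/post_training/general.py \
    --yes \
    --factory cosmos_diffusion_7b_text2world_finetune \
    data.path=$CACHED_DATA \
    resume.restore_config.path=<path/to/nemo/distributed/checkpoint> \
    trainer.max_steps=1000 \
    optim.config.lr=1e-6
```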
##### Video2World
1. Set the following environment variables:
```bash
export HF_TOKEN="<your/HF/access/token>"
export HF_HOME="<path/to/store/checkpoints>"
# Optionally, you can monitor training progress with Weights and Biases (wandb).
export WANDB_API_KEY="</your/wandb/api/key>"
export WANDB_PROJECT_NAME="cosmos-diffusion-nemo-post-training"
export WANDB_RUN_ID="cosmos_diffusion_7b_video2world_finetune"
```
2. Run the following command for Cosmos-Diffusion-Video2World-7B general post-training:
```bash
NVTE_FUSED_ATTN=0 \
CUDA_DEVICE_MAX_CONNECTIONS=1 \
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
torchrun --nproc_per_node=8 cosmos1/models/diffusion/nemo/post_training/video2world.py \
--yes \
--factory cosmos_diffusion_7b_video2world_finetune \
data.path=$CACHED_DATA \
trainer.max_steps=1000 \
optim.config.lr=1e-6
```
You can now run inference with your post-trained model using the instructions [here](../inference/README.md#run-the-inference-script-with-post-trained-model).
###### Configuration Options
Before getting started, review the following parameters made available to the script. You can adjust these parameters to optimize performance based on your specific requirements.
| Parameter | Description | Default |
|--------------------------------|---------------------------------------------------------------------------------|---------|
| `--factory` | Recipe to use: `cosmos_diffusion_7b_video2world_finetune` or `cosmos_diffusion_14b_video2world_finetune` for Video2World post-training. | cosmos_diffusion_7b_video2world_finetune |
| `data.path` | Path to processed post-training dataset (str). | None |
| `resume.restore_config.path` | Path to pre-trained Cosmos Diffusion NeMo distributed checkpoint (str). | None |
| `optim.config.lr` | Learning rate (float). | 1e-6 |
| `trainer.max_steps` | Max number of post-training steps (int). | 1000 |
| `log.log_dir` | Path to folder to save post-training logs and checkpoints (str). | None | | {
"source": "NVIDIA/Cosmos",
"title": "cosmos1/models/diffusion/nemo/post_training/README.md",
"url": "https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/diffusion/nemo/post_training/README.md",
"date": "2024-12-30T17:21:14",
"stars": 7578,
"description": "Cosmos is a world model development platform that consists of world foundation models, tokenizers and video processing pipeline to accelerate the development of Physical AI at Robotics & AV labs. Cosmos is purpose built for physical AI. The Cosmos repository will enable end users to run the Cosmos models, run inference scripts and generate videos.",
"file_size": 13563
} |
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
<[email protected]>.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org),
version 2.0, available at
<https://www.contributor-covenant.org/version/2/0/code_of_conduct.html>.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
For answers to common questions about this code of conduct, see the FAQ at
<https://www.contributor-covenant.org/faq>. Translations are available at
<https://www.contributor-covenant.org/translations>. | {
"source": "cyclotruc/gitingest",
"title": "CODE_OF_CONDUCT.md",
"url": "https://github.com/cyclotruc/gitingest/blob/main/CODE_OF_CONDUCT.md",
"date": "2024-11-29T08:27:18",
"stars": 6697,
"description": "Replace 'hub' with 'ingest' in any github url to get a prompt-friendly extract of a codebase ",
"file_size": 5205
} |
# Contributing to Gitingest
Thanks for your interest in contributing to Gitingest! 🚀 Gitingest aims to be friendly for first time contributors, with a simple Python and HTML codebase. We would love your help to make it even better. If you need any help while working with the code, please reach out to us on [Discord](https://discord.com/invite/zerRaGK9EC).
## How to Contribute (non-technical)
- **Create an Issue**: If you find a bug or have an idea for a new feature, please [create an issue](https://github.com/cyclotruc/gitingest/issues/new) on GitHub. This will help us track and prioritize your request.
- **Spread the Word**: If you like Gitingest, please share it with your friends, colleagues, and on social media. This will help us grow the community and make Gitingest even better.
- **Use Gitingest**: The best feedback comes from real-world usage! If you encounter any issues or have ideas for improvement, please let us know by [creating an issue](https://github.com/cyclotruc/gitingest/issues/new) on GitHub or by reaching out to us on [Discord](https://discord.com/invite/zerRaGK9EC).
## How to submit a Pull Request
1. Fork the repository.
2. Clone the forked repository:
```bash
git clone https://github.com/cyclotruc/gitingest.git
cd gitingest
```
3. Set up the development environment and install dependencies:
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements-dev.txt
pre-commit install
```
4. Create a new branch for your changes:
```bash
git checkout -b your-branch
```
5. Make your changes. Make sure to add corresponding tests for your changes.
6. Stage your changes:
```bash
git add .
```
7. Run the tests:
```bash
pytest
```
8. Run the local web server to test your changes:
   1. Navigate to the `src` folder:
``` bash
cd src
```
2. Run the local web server:
``` bash
uvicorn server.main:app
```
3. Open your browser and navigate to `http://localhost:8000` to see the app running.
9. Confirm that everything is working as expected. If you encounter any issues, fix them and repeat steps 6 to 8.
10. Commit your changes:
```bash
git commit -m "Your commit message"
```
If `pre-commit` raises any issues, fix them and repeat steps 6 to 9.
11. Push your changes:
```bash
git push origin your-branch
```
12. Open a pull request on GitHub. Make sure to include a detailed description of your changes.
13. Wait for the maintainers to review your pull request. If there are any issues, fix them and repeat steps 6 to 12.
*(Optional) Invite project maintainer to your branch for easier collaboration.* | {
"source": "cyclotruc/gitingest",
"title": "CONTRIBUTING.md",
"url": "https://github.com/cyclotruc/gitingest/blob/main/CONTRIBUTING.md",
"date": "2024-11-29T08:27:18",
"stars": 6697,
"description": "Replace 'hub' with 'ingest' in any github url to get a prompt-friendly extract of a codebase ",
"file_size": 2698
} |
# Gitingest
[](https://gitingest.com)
[](https://github.com/cyclotruc/gitingest/blob/main/LICENSE)
[](https://badge.fury.io/py/gitingest)
[](https://github.com/cyclotruc/gitingest)
[](https://pepy.tech/project/gitingest)
[](https://discord.com/invite/zerRaGK9EC)
Turn any Git repository into a prompt-friendly text ingest for LLMs.
You can also replace `hub` with `ingest` in any GitHub URL to access the corresponding digest.
[gitingest.com](https://gitingest.com) · [Chrome Extension](https://chromewebstore.google.com/detail/adfjahbijlkjfoicpjkhjicpjpjfaood) · [Firefox Add-on](https://addons.mozilla.org/firefox/addon/gitingest)
## 🚀 Features
- **Easy code context**: Get a text digest from a Git repository URL or a directory
- **Smart Formatting**: Optimized output format for LLM prompts
- **Statistics about**:
- File and directory structure
- Size of the extract
- Token count
- **CLI tool**: Run it as a shell command
- **Python package**: Import it in your code
## 📚 Requirements
- Python 3.7+
## 📦 Installation
``` bash
pip install gitingest
```
## 🧩 Browser Extension Usage
<!-- markdownlint-disable MD033 -->
<a href="https://chromewebstore.google.com/detail/adfjahbijlkjfoicpjkhjicpjpjfaood" target="_blank" title="Get Gitingest Extension from Chrome Web Store"><img height="48" src="https://github.com/user-attachments/assets/20a6e44b-fd46-4e6c-8ea6-aad436035753" alt="Available in the Chrome Web Store" /></a>
<a href="https://addons.mozilla.org/firefox/addon/gitingest" target="_blank" title="Get Gitingest Extension from Firefox Add-ons"><img height="48" src="https://github.com/user-attachments/assets/c0e99e6b-97cf-4af2-9737-099db7d3538b" alt="Get The Add-on for Firefox" /></a>
<a href="https://microsoftedge.microsoft.com/addons/detail/nfobhllgcekbmpifkjlopfdfdmljmipf" target="_blank" title="Get Gitingest Extension from Microsoft Edge Add-ons"><img height="48" src="https://github.com/user-attachments/assets/204157eb-4cae-4c0e-b2cb-db514419fd9e" alt="Get from the Edge Add-ons" /></a>
<!-- markdownlint-enable MD033 -->
The extension is open source at [lcandy2/gitingest-extension](https://github.com/lcandy2/gitingest-extension).
Issues and feature requests are welcome to the repo.
## 💡 Command line usage
The `gitingest` command line tool allows you to analyze codebases and create a text dump of their contents.
```bash
# Basic usage
gitingest /path/to/directory
# From URL
gitingest https://github.com/cyclotruc/gitingest
# See more options
gitingest --help
```
This will write the digest in a text file (default `digest.txt`) in your current working directory.
## 🐍 Python package usage
```python
# Synchronous usage
from gitingest import ingest
summary, tree, content = ingest("path/to/directory")
# or from URL
summary, tree, content = ingest("https://github.com/cyclotruc/gitingest")
```
By default, this won't write a file but can be enabled with the `output` argument.
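For example, to also write the digest to a file, pass the `output` argument (the file name below is arbitrary):
```python
from gitingest import ingest

# Returns the digest and also writes it to digest.txt (per the `output` argument described above).
summary, tree, content = ingest("path/to/directory", output="digest.txt")
```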
```python
# Asynchronous usage
from gitingest import ingest_async
import asyncio
result = asyncio.run(ingest_async("path/to/directory"))
```
### Jupyter notebook usage
```python
from gitingest import ingest_async
# Use await directly in Jupyter
summary, tree, content = await ingest_async("path/to/directory")
```
This works because Jupyter notebooks run an asyncio event loop by default, so `await` can be used directly at the top level.
## 🐳 Self-host
1. Build the image:
``` bash
docker build -t gitingest .
```
2. Run the container:
``` bash
docker run -d --name gitingest -p 8000:8000 gitingest
```
The application will be available at `http://localhost:8000`.
If you are hosting it on a domain, you can specify the allowed hostnames via env variable `ALLOWED_HOSTS`.
```bash
# Default: "gitingest.com, *.gitingest.com, localhost, 127.0.0.1".
ALLOWED_HOSTS="example.com, localhost, 127.0.0.1"
```
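For example, the variable can be passed to the container at startup with Docker's `-e` flag (hostnames below are placeholders):
```bash
docker run -d --name gitingest -p 8000:8000 \
  -e ALLOWED_HOSTS="example.com, localhost, 127.0.0.1" \
  gitingest
```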
## 🤝 Contributing
### Non-technical ways to contribute
- **Create an Issue**: If you find a bug or have an idea for a new feature, please [create an issue](https://github.com/cyclotruc/gitingest/issues/new) on GitHub. This will help us track and prioritize your request.
- **Spread the Word**: If you like Gitingest, please share it with your friends, colleagues, and on social media. This will help us grow the community and make Gitingest even better.
- **Use Gitingest**: The best feedback comes from real-world usage! If you encounter any issues or have ideas for improvement, please let us know by [creating an issue](https://github.com/cyclotruc/gitingest/issues/new) on GitHub or by reaching out to us on [Discord](https://discord.com/invite/zerRaGK9EC).
### Technical ways to contribute
Gitingest aims to be friendly for first time contributors, with a simple Python and HTML codebase. If you need any help while working with the code, reach out to us on [Discord](https://discord.com/invite/zerRaGK9EC). For detailed instructions on how to make a pull request, see [CONTRIBUTING.md](./CONTRIBUTING.md).
## 🛠️ Stack
- [Tailwind CSS](https://tailwindcss.com) - Frontend
- [FastAPI](https://github.com/fastapi/fastapi) - Backend framework
- [Jinja2](https://jinja.palletsprojects.com) - HTML templating
- [tiktoken](https://github.com/openai/tiktoken) - Token estimation
- [posthog](https://github.com/PostHog/posthog) - Amazing analytics
### Looking for a JavaScript/Node package?
Check out the NPM alternative 📦 Repomix: <https://github.com/yamadashy/repomix>
## 🚀 Project Growth
[](https://star-history.com/#cyclotruc/gitingest&Date) | {
"source": "cyclotruc/gitingest",
"title": "README.md",
"url": "https://github.com/cyclotruc/gitingest/blob/main/README.md",
"date": "2024-11-29T08:27:18",
"stars": 6697,
"description": "Replace 'hub' with 'ingest' in any github url to get a prompt-friendly extract of a codebase ",
"file_size": 5959
} |
# Security Policy
## Reporting a Vulnerability
If you have discovered a vulnerability inside the project, report it privately at <[email protected]>. This way the maintainer can work on a proper fix without disclosing the problem to the public before it has been solved. | {
"source": "cyclotruc/gitingest",
"title": "SECURITY.md",
"url": "https://github.com/cyclotruc/gitingest/blob/main/SECURITY.md",
"date": "2024-11-29T08:27:18",
"stars": 6697,
"description": "Replace 'hub' with 'ingest' in any github url to get a prompt-friendly extract of a codebase ",
"file_size": 273
} |
<div align="center">
<img align="left" width="100" height="100" src="https://github.com/user-attachments/assets/1834fc25-42ef-4237-9feb-53a01c137e83" alt="">
# SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
[Cheng-Yen Yang](https://yangchris11.github.io), [Hsiang-Wei Huang](https://hsiangwei0903.github.io/), [Wenhao Chai](https://rese1f.github.io/), [Zhongyu Jiang](https://zhyjiang.github.io/#/), [Jenq-Neng Hwang](https://people.ece.uw.edu/hwang/)
[Information Processing Lab, University of Washington](https://ipl-uw.github.io/)
</div>
[](https://paperswithcode.com/sota/visual-object-tracking-on-lasot-ext?p=samurai-adapting-segment-anything-model-for-1)
[](https://paperswithcode.com/sota/visual-object-tracking-on-got-10k?p=samurai-adapting-segment-anything-model-for-1)
[](https://paperswithcode.com/sota/visual-object-tracking-on-needforspeed?p=samurai-adapting-segment-anything-model-for-1)
[](https://paperswithcode.com/sota/visual-object-tracking-on-lasot?p=samurai-adapting-segment-anything-model-for-1)
[](https://paperswithcode.com/sota/visual-object-tracking-on-otb-2015?p=samurai-adapting-segment-anything-model-for-1)
[[Arxiv]](https://arxiv.org/abs/2411.11922) [[Project Page]](https://yangchris11.github.io/samurai/) [[Raw Results]](https://drive.google.com/drive/folders/1ssiDmsC7mw5AiItYQG4poiR1JgRq305y?usp=sharing)
This repository is the official implementation of SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory
https://github.com/user-attachments/assets/9d368ca7-2e9b-4fed-9da0-d2efbf620d88
All rights are reserved to the copyright owners (TM & © Universal (2019)). This clip is not intended for commercial use and is solely for academic demonstration in a research paper. Original source can be found [here](https://www.youtube.com/watch?v=cwUzUzpG8aM&t=4s).
## News
- [ ] **Incoming**: Support vot-challenge toolkit integration.
- [ ] **Incoming**: Release demo script to support inference on video (with mask prompt).
- [x] **2025/02/18**: Release multi-GPU inference script.
- [x] **2025/01/27**: Release [inference script](https://github.com/yangchris11/samurai/blob/master/sam2/tools/README.md#samurai-vos-inference) on VOS task (SA-V)!
- [x] **2024/11/21**: Release [demo script](https://github.com/yangchris11/samurai?tab=readme-ov-file#demo-on-custom-video) to support inference on video (bounding box prompt).
- [x] **2024/11/20** Release [inference script](https://github.com/yangchris11/samurai?tab=readme-ov-file#main-inference) on VOT task (LaSOT, LaSOT-ext, GOT-10k, UAV123, TrackingNet, OTB100)!
- [x] **2024/11/19**: Release [paper](https://arxiv.org/abs/2411.11922), [code](https://github.com/yangchris11/samurai), and [raw results](https://drive.google.com/drive/folders/1ssiDmsC7mw5AiItYQG4poiR1JgRq305y?usp=sharing)!
## Getting Started
#### SAMURAI Installation
SAM 2 needs to be installed first before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://github.com/facebookresearch/sam2?tab=readme-ov-file) to install both PyTorch and TorchVision dependencies. You can install **the SAMURAI version** of SAM 2 on a GPU machine using:
```
cd sam2
pip install -e .
pip install -e ".[notebooks]"
```
Please see [INSTALL.md](https://github.com/facebookresearch/sam2/blob/main/INSTALL.md) from the original SAM 2 repository for FAQs on potential issues and solutions.
Install other requirements:
```
pip install matplotlib==3.7 tikzplotlib jpeg4py opencv-python lmdb pandas scipy loguru
```
#### SAM 2.1 Checkpoint Download
```
cd checkpoints && \
./download_ckpts.sh && \
cd ..
```
#### Data Preparation
Please prepare the data in the following format:
```
data/LaSOT
├── airplane/
│ ├── airplane-1/
│ │ ├── full_occlusion.txt
│ │ ├── groundtruth.txt
│ │ ├── img
│ │ ├── nlp.txt
│ │ └── out_of_view.txt
│ ├── airplane-2/
│ ├── airplane-3/
│ ├── ...
├── basketball
├── bear
├── bicycle
...
├── training_set.txt
└── testing_set.txt
```
#### Main Inference
```
python scripts/main_inference.py
```
## Demo on Custom Video
To run the demo with your custom video or frame directory, use the following examples:
**Note:** The `.txt` file contains a single line with the bounding box of the first frame in `x,y,w,h` format, while SAM 2 takes `x1,y1,x2,y2` format as its bbox input.
### Input is Video File
```
python scripts/demo.py --video_path <your_video.mp4> --txt_path <path_to_first_frame_bbox.txt>
```
### Input is Frame Folder
```
# Only JPG images are supported
python scripts/demo.py --video_path <your_frame_directory> --txt_path <path_to_first_frame_bbox.txt>
```
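As noted above, the demo's `.txt` file uses `x,y,w,h` while SAM 2 works with `x1,y1,x2,y2`. For illustration, a first-frame box file containing `150,80,200,120` describes a box with top-left corner (150, 80), width 200, and height 120; a minimal conversion sketch (not part of the repository scripts) is:
```python
# Convert a first-frame bounding box from x,y,w,h (demo .txt format) to x1,y1,x2,y2 (SAM 2 format).
with open("first_frame_bbox.txt") as f:
    x, y, w, h = map(float, f.read().strip().split(","))

x1, y1, x2, y2 = x, y, x + w, y + h
print(x1, y1, x2, y2)
```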
## FAQs
**Question 1:** Does SAMURAI need training? [issue 34](https://github.com/yangchris11/samurai/issues/34)
**Answer 1:** Unlike real-life samurai, the proposed SAMURAI does not require additional training. It is a zero-shot method: we directly use the weights from SAM 2.1 to conduct VOT experiments. The Kalman filter is used to estimate the current and future state (bounding box location and scale, in our case) of a moving object based on measurements over time. It is a common approach that has been adopted in the field of tracking for a long time and does not require any training. Please refer to the code for more details.
**Question 2:** Does SAMURAI support streaming input (e.g. webcam)?
**Answer 2:** Not yet. The existing code doesn't support live/streaming video as we inherit most of the codebase from the amazing SAM 2. Some discussion that you might be interested in: facebookresearch/sam2#90, facebookresearch/sam2#388 (comment).
**Question 3:** How to use SAMURAI in longer video?
**Answer 3:** See the discussion from sam2 https://github.com/facebookresearch/sam2/issues/264.
**Question 4:** How do you run the evaluation on the VOT benchmarks?
**Answer 4:** For LaSOT, LaSOT-ext, OTB, NFS please refer to the [issue 74](https://github.com/yangchris11/samurai/issues/74) for more details. For GOT-10k-test and TrackingNet, please refer to the official portal for submission.
## Acknowledgment
SAMURAI is built on top of [SAM 2](https://github.com/facebookresearch/sam2?tab=readme-ov-file) by Meta FAIR.
The VOT evaluation code is modified from [VOT Toolkit](https://github.com/votchallenge/toolkit) by Luka Čehovin Zajc.
## Citation
Please consider citing our paper and the wonderful `SAM 2` if you found our work interesting and useful.
```
@article{ravi2024sam2,
title={SAM 2: Segment Anything in Images and Videos},
author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
journal={arXiv preprint arXiv:2408.00714},
url={https://arxiv.org/abs/2408.00714},
year={2024}
}
@misc{yang2024samurai,
title={SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory},
author={Cheng-Yen Yang and Hsiang-Wei Huang and Wenhao Chai and Zhongyu Jiang and Jenq-Neng Hwang},
year={2024},
eprint={2411.11922},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2411.11922},
}
``` | {
"source": "yangchris11/samurai",
"title": "README.md",
"url": "https://github.com/yangchris11/samurai/blob/master/README.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 8255
} |
# Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to make participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies within all project spaces, and it also applies when
an individual is representing the project or its community in public spaces.
Examples of representing a project or community include using an official
project e-mail address, posting via an official social media account, or acting
as an appointed representative at an online or offline event. Representation of
a project may be further defined and clarified by project maintainers.
This Code of Conduct also applies outside the project spaces when there is a
reasonable belief that an individual's behavior may have a negative impact on
the project or its community.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at <[email protected]>. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq | {
"source": "yangchris11/samurai",
"title": "sam2/CODE_OF_CONDUCT.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/CODE_OF_CONDUCT.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 3540
} |
# Contributing to segment-anything
We want to make contributing to this project as easy and transparent as
possible.
## Pull Requests
We actively welcome your pull requests.
1. Fork the repo and create your branch from `main`.
2. If you've added code that should be tested, add tests.
3. If you've changed APIs, update the documentation.
4. Ensure the test suite passes.
5. Make sure your code lints, using the `ufmt format` command. Linting requires `black==24.2.0`, `usort==1.0.2`, and `ufmt==2.0.0b2`, which can be installed via `pip install -e ".[dev]"`.
6. If you haven't already, complete the Contributor License Agreement ("CLA").
## Contributor License Agreement ("CLA")
In order to accept your pull request, we need you to submit a CLA. You only need
to do this once to work on any of Facebook's open source projects.
Complete your CLA here: <https://code.facebook.com/cla>
## Issues
We use GitHub issues to track public bugs. Please ensure your description is
clear and has sufficient instructions to be able to reproduce the issue.
Facebook has a [bounty program](https://www.facebook.com/whitehat/) for the safe
disclosure of security bugs. In those cases, please go through the process
outlined on that page and do not file a public issue.
## License
By contributing to segment-anything, you agree that your contributions will be licensed
under the LICENSE file in the root directory of this source tree. | {
"source": "yangchris11/samurai",
"title": "sam2/CONTRIBUTING.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/CONTRIBUTING.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 1424
} |
## Installation
### Requirements
- Linux with Python ≥ 3.10, PyTorch ≥ 2.3.1 and [torchvision](https://github.com/pytorch/vision/) that matches the PyTorch installation. Install them together at https://pytorch.org to ensure this.
* Note older versions of Python or PyTorch may also work. However, the versions above are strongly recommended to provide all features such as `torch.compile`.
- [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. This should typically be CUDA 12.1 if you follow the default installation command.
- If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.
Then, install SAM 2 from the root of this repository via
```bash
pip install -e ".[notebooks]"
```
Note that you may skip building the SAM 2 CUDA extension during installation via environment variable `SAM2_BUILD_CUDA=0`, as follows:
```bash
# skip the SAM 2 CUDA extension
SAM2_BUILD_CUDA=0 pip install -e ".[notebooks]"
```
This would also skip the post-processing step at runtime (removing small holes and sprinkles in the output masks, which requires the CUDA extension), but shouldn't affect the results in most cases.
### Building the SAM 2 CUDA extension
By default, we allow the installation to proceed even if the SAM 2 CUDA extension fails to build. (In this case, the build errors are hidden unless using `-v` for verbose output in `pip install`.)
If you see a message like `Skipping the post-processing step due to the error above` at runtime or `Failed to build the SAM 2 CUDA extension due to the error above` during installation, it indicates that the SAM 2 CUDA extension failed to build in your environment. In this case, **you can still use SAM 2 for both image and video applications**. The post-processing step (removing small holes and sprinkles in the output masks) will be skipped, but this shouldn't affect the results in most cases.
If you would like to enable this post-processing step, you can reinstall SAM 2 on a GPU machine with environment variable `SAM2_BUILD_ALLOW_ERRORS=0` to force building the CUDA extension (and raise errors if it fails to build), as follows
```bash
pip uninstall -y SAM-2 && \
rm -f ./sam2/*.so && \
SAM2_BUILD_ALLOW_ERRORS=0 pip install -v -e ".[notebooks]"
```
Note that PyTorch needs to be installed first before building the SAM 2 CUDA extension. It's also necessary to install [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) that match the CUDA version for your PyTorch installation. (This should typically be CUDA 12.1 if you follow the default installation command.) After installing the CUDA toolkits, you can check its version via `nvcc --version`.
Please check the section below on common installation issues if the CUDA extension fails to build during installation or load at runtime.
### Common Installation Issues
Click each issue for its solutions:
<details>
<summary>
I got `ImportError: cannot import name '_C' from 'sam2'`
</summary>
<br/>
This is usually because you haven't run the `pip install -e ".[notebooks]"` step above or the installation failed. Please install SAM 2 first, and see the other issues if your installation fails.
In some systems, you may need to run `python setup.py build_ext --inplace` in the SAM 2 repo root as suggested in https://github.com/facebookresearch/sam2/issues/77.
</details>
<details>
<summary>
I got `MissingConfigException: Cannot find primary config 'configs/sam2.1/sam2.1_hiera_l.yaml'`
</summary>
<br/>
This is usually because you haven't run the `pip install -e .` step above, so `sam2` isn't in your Python's `sys.path`. Please run this installation step. In case it still fails after the installation step, you may try manually adding the root of this repo to `PYTHONPATH` via
```bash
export SAM2_REPO_ROOT=/path/to/sam2 # path to this repo
export PYTHONPATH="${SAM2_REPO_ROOT}:${PYTHONPATH}"
```
to manually add `sam2_configs` into your Python's `sys.path`.
</details>
<details>
<summary>
I got `RuntimeError: Error(s) in loading state_dict for SAM2Base` when loading the new SAM 2.1 checkpoints
</summary>
<br/>
This is likely because you have installed a previous version of this repo, which doesn't have the new modules to support the SAM 2.1 checkpoints yet. Please try the following steps:
1. pull the latest code from the `main` branch of this repo
2. run `pip uninstall -y SAM-2` to uninstall any previous installations
3. then install the latest repo again using `pip install -e ".[notebooks]"`
In case the steps above still don't resolve the error, please try running in your Python environment the following
```python
from sam2.modeling import sam2_base
print(sam2_base.__file__)
```
and check whether the content in the printed local path of `sam2/modeling/sam2_base.py` matches the latest one in https://github.com/facebookresearch/sam2/blob/main/sam2/modeling/sam2_base.py (e.g. whether your local file has `no_obj_embed_spatial`) to identify whether you're still using a previous installation.
</details>
<details>
<summary>
My installation failed with `CUDA_HOME environment variable is not set`
</summary>
<br/>
This usually happens because the installation step cannot find the CUDA toolkits (that contain the NVCC compiler) to build a custom CUDA kernel in SAM 2. Please install [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) or the version that matches the CUDA version for your PyTorch installation. If the error persists after installing CUDA toolkits, you may explicitly specify `CUDA_HOME` via
```
export CUDA_HOME=/usr/local/cuda # change to your CUDA toolkit path
```
and rerun the installation.
Also, you should make sure
```
python -c 'import torch; from torch.utils.cpp_extension import CUDA_HOME; print(torch.cuda.is_available(), CUDA_HOME)'
```
print `(True, a directory with cuda)` to verify that the CUDA toolkits are correctly set up.
If you are still having problems after verifying that the CUDA toolkit is installed and the `CUDA_HOME` environment variable is set properly, you may have to add the `--no-build-isolation` flag to the pip command:
```
pip install --no-build-isolation -e .
```
</details>
<details>
<summary>
I got `undefined symbol: _ZN3c1015SmallVectorBaseIjE8grow_podEPKvmm` (or similar errors)
</summary>
<br/>
This usually happens because you have multiple versions of dependencies (PyTorch or CUDA) in your environment. During installation, the SAM 2 library is compiled against one version library while at run time it links against another version. This might be due to that you have different versions of PyTorch or CUDA installed separately via `pip` or `conda`. You may delete one of the duplicates to only keep a single PyTorch and CUDA version.
In particular, if you have a lower PyTorch version than 2.3.1, it's recommended to upgrade to PyTorch 2.3.1 or higher first. Otherwise, the installation script will try to upgrade to the latest PyTorch using `pip`, which could sometimes lead to duplicated PyTorch installation if you have previously installed another PyTorch version using `conda`.
We have been building SAM 2 against PyTorch 2.3.1 internally. However, a few user comments (e.g. https://github.com/facebookresearch/sam2/issues/22, https://github.com/facebookresearch/sam2/issues/14) suggested that downgrading to PyTorch 2.1.0 might resolve this problem. In case the error persists, you may try changing the restriction from `torch>=2.3.1` to `torch>=2.1.0` in both [`pyproject.toml`](pyproject.toml) and [`setup.py`](setup.py) to allow PyTorch 2.1.0.
</details>
<details>
<summary>
I got `CUDA error: no kernel image is available for execution on the device`
</summary>
<br/>
A possible cause could be that the CUDA kernel is somehow not compiled towards your GPU's CUDA [capability](https://developer.nvidia.com/cuda-gpus). This could happen if the installation is done in an environment different from the runtime (e.g. in a slurm system).
You can try pulling the latest code from the SAM 2 repo and running the following
```bash
export TORCH_CUDA_ARCH_LIST="9.0 8.0 8.6 8.9 7.0 7.2 7.5 6.0"
```
to manually specify the CUDA capability in the compilation target so that it matches your GPU, and then rerun the installation to rebuild the extension.
</details>
<details>
<summary>
I got `RuntimeError: No available kernel. Aborting execution.` (or similar errors)
</summary>
<br/>
This is probably because your machine doesn't have a GPU or a compatible PyTorch version for Flash Attention (see also https://discuss.pytorch.org/t/using-f-scaled-dot-product-attention-gives-the-error-runtimeerror-no-available-kernel-aborting-execution/180900 for a discussion in PyTorch forum). You may be able to resolve this error by replacing the line
```python
OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = get_sdpa_settings()
```
in [`sam2/modeling/sam/transformer.py`](sam2/modeling/sam/transformer.py) with
```python
OLD_GPU, USE_FLASH_ATTN, MATH_KERNEL_ON = True, True, True
```
to relax the attention kernel setting and use other kernels than Flash Attention.
</details>
<details>
<summary>
I got `Error compiling objects for extension`
</summary>
<br/>
You may see error log of:
> unsupported Microsoft Visual Studio version! Only the versions between 2017 and 2022 (inclusive) are supported! The nvcc flag '-allow-unsupported-compiler' can be used to override this version check; however, using an unsupported host compiler may cause compilation failure or incorrect run time execution. Use at your own risk.
This is probably because your versions of CUDA and Visual Studio are incompatible. (see also https://stackoverflow.com/questions/78515942/cuda-compatibility-with-visual-studio-2022-version-17-10 for a discussion in stackoverflow).<br>
You may be able to fix this by adding the `-allow-unsupported-compiler` argument to `nvcc` after L48 in the [setup.py](https://github.com/facebookresearch/sam2/blob/main/setup.py). <br>
After adding the argument, `get_extensions()` will look like this:
```python
def get_extensions():
srcs = ["sam2/csrc/connected_components.cu"]
compile_args = {
"cxx": [],
"nvcc": [
"-DCUDA_HAS_FP16=1",
"-D__CUDA_NO_HALF_OPERATORS__",
"-D__CUDA_NO_HALF_CONVERSIONS__",
"-D__CUDA_NO_HALF2_OPERATORS__",
"-allow-unsupported-compiler" # Add this argument
],
}
ext_modules = [CUDAExtension("sam2._C", srcs, extra_compile_args=compile_args)]
return ext_modules
```
</details> | {
"source": "yangchris11/samurai",
"title": "sam2/INSTALL.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/INSTALL.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 10578
} |
# SAM 2: Segment Anything in Images and Videos
**[AI at Meta, FAIR](https://ai.meta.com/research/)**
[Nikhila Ravi](https://nikhilaravi.com/), [Valentin Gabeur](https://gabeur.github.io/), [Yuan-Ting Hu](https://scholar.google.com/citations?user=E8DVVYQAAAAJ&hl=en), [Ronghang Hu](https://ronghanghu.com/), [Chaitanya Ryali](https://scholar.google.com/citations?user=4LWx24UAAAAJ&hl=en), [Tengyu Ma](https://scholar.google.com/citations?user=VeTSl0wAAAAJ&hl=en), [Haitham Khedr](https://hkhedr.com/), [Roman Rädle](https://scholar.google.de/citations?user=Tpt57v0AAAAJ&hl=en), [Chloe Rolland](https://scholar.google.com/citations?hl=fr&user=n-SnMhoAAAAJ), [Laura Gustafson](https://scholar.google.com/citations?user=c8IpF9gAAAAJ&hl=en), [Eric Mintun](https://ericmintun.github.io/), [Junting Pan](https://junting.github.io/), [Kalyan Vasudev Alwala](https://scholar.google.co.in/citations?user=m34oaWEAAAAJ&hl=en), [Nicolas Carion](https://www.nicolascarion.com/), [Chao-Yuan Wu](https://chaoyuan.org/), [Ross Girshick](https://www.rossgirshick.info/), [Piotr Dollár](https://pdollar.github.io/), [Christoph Feichtenhofer](https://feichtenhofer.github.io/)
[[`Paper`](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/)] [[`Project`](https://ai.meta.com/sam2)] [[`Demo`](https://sam2.metademolab.com/)] [[`Dataset`](https://ai.meta.com/datasets/segment-anything-video)] [[`Blog`](https://ai.meta.com/blog/segment-anything-2)] [[`BibTeX`](#citing-sam-2)]

**Segment Anything Model 2 (SAM 2)** is a foundation model towards solving promptable visual segmentation in images and videos. We extend SAM to video by considering images as a video with a single frame. The model design is a simple transformer architecture with streaming memory for real-time video processing. We build a model-in-the-loop data engine, which improves model and data via user interaction, to collect [**our SA-V dataset**](https://ai.meta.com/datasets/segment-anything-video), the largest video segmentation dataset to date. SAM 2 trained on our data provides strong performance across a wide range of tasks and visual domains.

## Latest updates
**09/30/2024 -- SAM 2.1 Developer Suite (new checkpoints, training code, web demo) is released**
- A new suite of improved model checkpoints (denoted as **SAM 2.1**) are released. See [Model Description](#model-description) for details.
* To use the new SAM 2.1 checkpoints, you need the latest model code from this repo. If you have installed an earlier version of this repo, please first uninstall the previous version via `pip uninstall SAM-2`, pull the latest code from this repo (with `git pull`), and then reinstall the repo following [Installation](#installation) below.
- The training (and fine-tuning) code has been released. See [`training/README.md`](training/README.md) on how to get started.
- The frontend + backend code for the SAM 2 web demo has been released. See [`demo/README.md`](demo/README.md) for details.
## Installation
SAM 2 needs to be installed first before use. The code requires `python>=3.10`, as well as `torch>=2.3.1` and `torchvision>=0.18.1`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. You can install SAM 2 on a GPU machine using:
```bash
git clone https://github.com/facebookresearch/sam2.git && cd sam2
pip install -e .
```
If you are installing on Windows, it's strongly recommended to use [Windows Subsystem for Linux (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install) with Ubuntu.
To use the SAM 2 predictor and run the example notebooks, `jupyter` and `matplotlib` are required and can be installed by:
```bash
pip install -e ".[notebooks]"
```
Note:
1. It's recommended to create a new Python environment via [Anaconda](https://www.anaconda.com/) for this installation and install PyTorch 2.3.1 (or higher) via `pip` following https://pytorch.org/. If you have a PyTorch version lower than 2.3.1 in your current environment, the installation command above will try to upgrade it to the latest PyTorch version using `pip`.
2. The step above requires compiling a custom CUDA kernel with the `nvcc` compiler. If it isn't already available on your machine, please install the [CUDA toolkits](https://developer.nvidia.com/cuda-toolkit-archive) with a version that matches your PyTorch CUDA version.
3. If you see a message like `Failed to build the SAM 2 CUDA extension` during installation, you can ignore it and still use SAM 2 (some post-processing functionality may be limited, but it doesn't affect the results in most cases).
Please see [`INSTALL.md`](./INSTALL.md) for FAQs on potential issues and solutions.
## Getting Started
### Download Checkpoints
First, we need to download a model checkpoint. All the model checkpoints can be downloaded by running:
```bash
cd checkpoints && \
./download_ckpts.sh && \
cd ..
```
or individually from:
- [sam2.1_hiera_tiny.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)
- [sam2.1_hiera_small.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)
- [sam2.1_hiera_base_plus.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)
- [sam2.1_hiera_large.pt](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)
(note that these are the improved checkpoints denoted as SAM 2.1; see [Model Description](#model-description) for details.)
Then SAM 2 can be used in a few lines as follows for image and video prediction.
### Image prediction
SAM 2 has all the capabilities of [SAM](https://github.com/facebookresearch/segment-anything) on static images, and we provide image prediction APIs that closely resemble SAM for image use cases. The `SAM2ImagePredictor` class has an easy interface for image prompting.
```python
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor
checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
predictor.set_image(<your_image>)
masks, _, _ = predictor.predict(<input_prompts>)
```
Please refer to the examples in [image_predictor_example.ipynb](./notebooks/image_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/image_predictor_example.ipynb)) for static image use cases.
SAM 2 also supports automatic mask generation on images just like SAM. Please see [automatic_mask_generator_example.ipynb](./notebooks/automatic_mask_generator_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/automatic_mask_generator_example.ipynb)) for automatic mask generation in images.
### Video prediction
For promptable segmentation and tracking in videos, we provide a video predictor with APIs for example to add prompts and propagate masklets throughout a video. SAM 2 supports video inference on multiple objects and uses an inference state to keep track of the interactions in each video.
```python
import torch
from sam2.build_sam import build_sam2_video_predictor
checkpoint = "./checkpoints/sam2.1_hiera_large.pt"
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"
predictor = build_sam2_video_predictor(model_cfg, checkpoint)
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
state = predictor.init_state(<your_video>)
# add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)
# propagate the prompts to get masklets throughout the video
for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
...
```
Please refer to the examples in [video_predictor_example.ipynb](./notebooks/video_predictor_example.ipynb) (also in Colab [here](https://colab.research.google.com/github/facebookresearch/sam2/blob/main/notebooks/video_predictor_example.ipynb)) for details on how to add click or box prompts, make refinements, and track multiple objects in videos.
## Load from 🤗 Hugging Face
Alternatively, models can also be loaded from [Hugging Face](https://huggingface.co/models?search=facebook/sam2) (requires `pip install huggingface_hub`).
For image prediction:
```python
import torch
from sam2.sam2_image_predictor import SAM2ImagePredictor
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
predictor.set_image(<your_image>)
masks, _, _ = predictor.predict(<input_prompts>)
```
For video prediction:
```python
import torch
from sam2.sam2_video_predictor import SAM2VideoPredictor
predictor = SAM2VideoPredictor.from_pretrained("facebook/sam2-hiera-large")
with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
state = predictor.init_state(<your_video>)
# add new prompts and instantly get the output on the same frame
    frame_idx, object_ids, masks = predictor.add_new_points_or_box(state, <your_prompts>)
# propagate the prompts to get masklets throughout the video
for frame_idx, object_ids, masks in predictor.propagate_in_video(state):
...
```
## Model Description
### SAM 2.1 checkpoints
The table below shows the improved SAM 2.1 checkpoints released on September 29, 2024.
| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |
| :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: |
| sam2.1_hiera_tiny <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_tiny.pt)) | 38.9 | 47.2 | 76.5 | 71.8 | 77.3 |
| sam2.1_hiera_small <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\*) | 76.6 | 73.5 | 78.3 |
| sam2.1_hiera_base_plus <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\*) | 78.2 | 73.7 | 78.2 |
| sam2.1_hiera_large <br /> ([config](sam2/configs/sam2.1/sam2.1_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\*) | 79.5 | 74.6 | 80.6 |
### SAM 2 checkpoints
The previous SAM 2 checkpoints released on July 29, 2024 can be found as follows:
| **Model** | **Size (M)** | **Speed (FPS)** | **SA-V test (J&F)** | **MOSE val (J&F)** | **LVOS v2 (J&F)** |
| :------------------: | :----------: | :--------------------: | :-----------------: | :----------------: | :---------------: |
| sam2_hiera_tiny <br /> ([config](sam2/configs/sam2/sam2_hiera_t.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_tiny.pt)) | 38.9 | 47.2 | 75.0 | 70.9 | 75.3 |
| sam2_hiera_small <br /> ([config](sam2/configs/sam2/sam2_hiera_s.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_small.pt)) | 46 | 43.3 (53.0 compiled\*) | 74.9 | 71.5 | 76.4 |
| sam2_hiera_base_plus <br /> ([config](sam2/configs/sam2/sam2_hiera_b+.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_base_plus.pt)) | 80.8 | 34.8 (43.8 compiled\*) | 74.7 | 72.8 | 75.8 |
| sam2_hiera_large <br /> ([config](sam2/configs/sam2/sam2_hiera_l.yaml), [checkpoint](https://dl.fbaipublicfiles.com/segment_anything_2/072824/sam2_hiera_large.pt)) | 224.4 | 24.2 (30.2 compiled\*) | 76.0 | 74.6 | 79.8 |
\* Compile the model by setting `compile_image_encoder: True` in the config.
## Segment Anything Video Dataset
See [sav_dataset/README.md](sav_dataset/README.md) for details.
## Training SAM 2
You can train or fine-tune SAM 2 on custom datasets of images, videos, or both. Please check the training [README](training/README.md) on how to get started.
## Web demo for SAM 2
We have released the frontend + backend code for the SAM 2 web demo (a locally deployable version similar to https://sam2.metademolab.com/demo). Please see the web demo [README](demo/README.md) for details.
## License
The SAM 2 model checkpoints, SAM 2 demo code (front-end and back-end), and SAM 2 training code are licensed under [Apache 2.0](./LICENSE), however the [Inter Font](https://github.com/rsms/inter?tab=OFL-1.1-1-ov-file) and [Noto Color Emoji](https://github.com/googlefonts/noto-emoji) used in the SAM 2 demo code are made available under the [SIL Open Font License, version 1.1](https://openfontlicense.org/open-font-license-official-text/).
## Contributing
See [contributing](CONTRIBUTING.md) and the [code of conduct](CODE_OF_CONDUCT.md).
## Contributors
The SAM 2 project was made possible with the help of many contributors (alphabetical):
Karen Bergan, Daniel Bolya, Alex Bosenberg, Kai Brown, Vispi Cassod, Christopher Chedeau, Ida Cheng, Luc Dahlin, Shoubhik Debnath, Rene Martinez Doehner, Grant Gardner, Sahir Gomez, Rishi Godugu, Baishan Guo, Caleb Ho, Andrew Huang, Somya Jain, Bob Kamma, Amanda Kallet, Jake Kinney, Alexander Kirillov, Shiva Koduvayur, Devansh Kukreja, Robert Kuo, Aohan Lin, Parth Malani, Jitendra Malik, Mallika Malhotra, Miguel Martin, Alexander Miller, Sasha Mitts, William Ngan, George Orlin, Joelle Pineau, Kate Saenko, Rodrick Shepard, Azita Shokrpour, David Soofian, Jonathan Torres, Jenny Truong, Sagar Vaze, Meng Wang, Claudette Ward, Pengchuan Zhang.
Third-party code: we use a GPU-based connected component algorithm adapted from [`cc_torch`](https://github.com/zsef123/Connected_components_PyTorch) (with its license in [`LICENSE_cctorch`](./LICENSE_cctorch)) as an optional post-processing step for the mask predictions.
## Citing SAM 2
If you use SAM 2 or the SA-V dataset in your research, please use the following BibTeX entry.
```bibtex
@article{ravi2024sam2,
title={SAM 2: Segment Anything in Images and Videos},
author={Ravi, Nikhila and Gabeur, Valentin and Hu, Yuan-Ting and Hu, Ronghang and Ryali, Chaitanya and Ma, Tengyu and Khedr, Haitham and R{\"a}dle, Roman and Rolland, Chloe and Gustafson, Laura and Mintun, Eric and Pan, Junting and Alwala, Kalyan Vasudev and Carion, Nicolas and Wu, Chao-Yuan and Girshick, Ross and Doll{\'a}r, Piotr and Feichtenhofer, Christoph},
journal={arXiv preprint arXiv:2408.00714},
url={https://arxiv.org/abs/2408.00714},
year={2024}
}
``` | {
"source": "yangchris11/samurai",
"title": "sam2/README.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/README.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 15439
} |
# SAM 2 Demo
Welcome to the SAM 2 Demo! This project consists of a frontend built with React TypeScript and Vite and a backend service using Python Flask and Strawberry GraphQL. Both components can be run in Docker containers or locally on MPS (Metal Performance Shaders) or CPU. However, running the backend service on MPS or CPU devices may result in significantly slower performance (FPS).
## Prerequisites
Before you begin, ensure you have the following installed on your system:
- Docker and Docker Compose
- [OPTIONAL] Node.js and Yarn for running frontend locally
- [OPTIONAL] Anaconda for running backend locally
### Installing Docker
To install Docker, follow these steps:
1. Go to the [Docker website](https://www.docker.com/get-started)
2. Follow the installation instructions for your operating system.
### [OPTIONAL] Installing Node.js and Yarn
To install Node.js and Yarn, follow these steps:
1. Go to the [Node.js website](https://nodejs.org/en/download/).
2. Follow the installation instructions for your operating system.
3. Once Node.js is installed, open a terminal or command prompt and run the following command to install Yarn:
```
npm install -g yarn
```
### [OPTIONAL] Installing Anaconda
To install Anaconda, follow these steps:
1. Go to the [Anaconda website](https://www.anaconda.com/products/distribution).
2. Follow the installation instructions for your operating system.
## Quick Start
To get both the frontend and backend running quickly using Docker, you can use the following command:
```bash
docker compose up --build
```
> [!WARNING]
> On macOS, Docker containers only support running on CPU. MPS is not supported through Docker. If you want to run the demo backend service on MPS, you will need to run it locally (see "Running the Backend Locally" below).
This will build and start both services. You can access them at:
- **Frontend:** [http://localhost:7262](http://localhost:7262)
- **Backend:** [http://localhost:7263/graphql](http://localhost:7263/graphql)
## Running Backend with MPS Support
MPS (Metal Performance Shaders) is not supported with Docker. To use MPS, you need to run the backend on your local machine.
### Setting Up Your Environment
1. **Create Conda environment**
Create a new Conda environment for this project by running the following command or use your existing conda environment for SAM 2:
```
conda create --name sam2-demo python=3.10 --yes
```
This will create a new environment named `sam2-demo` with Python 3.10 as the interpreter.
2. **Activate the Conda environment:**
```bash
conda activate sam2-demo
```
3. **Install ffmpeg**
```bash
conda install -c conda-forge ffmpeg
```
4. **Install SAM 2 demo dependencies:**
Install project dependencies by running the following command in the SAM 2 checkout root directory:
```bash
pip install -e '.[interactive-demo]'
```
### Running the Backend Locally
Download the SAM 2 checkpoints:
```bash
(cd ./checkpoints && ./download_ckpts.sh)
```
Use the following command to start the backend with MPS support:
```bash
cd demo/backend/server/
```
```bash
PYTORCH_ENABLE_MPS_FALLBACK=1 \
APP_ROOT="$(pwd)/../../../" \
APP_URL=http://localhost:7263 \
MODEL_SIZE=base_plus \
DATA_PATH="$(pwd)/../../data" \
DEFAULT_VIDEO_PATH=gallery/05_default_juggle.mp4 \
gunicorn \
--worker-class gthread app:app \
--workers 1 \
--threads 2 \
--bind 0.0.0.0:7263 \
--timeout 60
```
Options for the `MODEL_SIZE` argument are "tiny", "small", "base_plus" (default), and "large".
> [!WARNING]
> Running the backend service on MPS devices can cause fatal crashes with the Gunicorn worker due to insufficient MPS memory. Try switching to CPU devices by setting the `SAM2_DEMO_FORCE_CPU_DEVICE=1` environment variable.
### Starting the Frontend
If you wish to run the frontend separately (useful for development), follow these steps:
1. **Navigate to demo frontend directory:**
```bash
cd demo/frontend
```
2. **Install dependencies:**
```bash
yarn install
```
3. **Start the development server:**
```bash
yarn dev --port 7262
```
This will start the frontend development server on [http://localhost:7262](http://localhost:7262).
## Docker Tips
- To rebuild the Docker containers (useful if you've made changes to the Dockerfile or dependencies):
```bash
docker compose up --build
```
- To stop the Docker containers:
```bash
docker compose down
```
## Contributing
Contributions are welcome! Please read our contributing guidelines to get started.
## License
See the LICENSE file for details.
---
By following these instructions, you should have a fully functional development environment for both the frontend and backend of the SAM 2 Demo. Happy coding! | {
"source": "yangchris11/samurai",
"title": "sam2/demo/README.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/demo/README.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 4796
} |
# Segment Anything Video (SA-V) Dataset
## Overview
[Segment Anything Video (SA-V)](https://ai.meta.com/datasets/segment-anything-video/), consists of 51K diverse videos and 643K high-quality spatio-temporal segmentation masks (i.e., masklets). The dataset is released under the CC by 4.0 license. Browse the dataset [here](https://sam2.metademolab.com/dataset).

## Getting Started
### Download the dataset
Visit [here](https://ai.meta.com/datasets/segment-anything-video-downloads/) to download SA-V including the training, val and test sets.
### Dataset Stats
| | Num Videos | Num Masklets |
| ---------- | ---------- | ----------------------------------------- |
| SA-V train | 50,583 | 642,036 (auto 451,720 and manual 190,316) |
| SA-V val | 155 | 293 |
| SA-V test | 150 | 278 |
### Notebooks
To load and visualize the SA-V training set annotations, refer to the example [sav_visualization_example.ipynb](./sav_visualization_example.ipynb) notebook.
### SA-V train
For the SA-V training set we release the mp4 videos and store the masklet annotations per video as json files. Automatic masklets and manual masklets are stored separately as two json files: `{video_id}_auto.json` and `{video_id}_manual.json`. They can be loaded as dictionaries in Python in the format below.
```
{
"video_id" : str; video id
"video_duration" : float64; the duration in seconds of this video
"video_frame_count" : float64; the number of frames in the video
"video_height" : float64; the height of the video
"video_width" : float64; the width of the video
"video_resolution" : float64; video_height $\times$ video_width
"video_environment" : List[str]; "Indoor" or "Outdoor"
"video_split" : str; "train" for training set
"masklet" : List[List[Dict]]; masklet annotations in list of list of RLEs.
The outer list is over frames in the video and the inner list
is over objects in the video.
"masklet_id" : List[int]; the masklet ids
"masklet_size_rel" : List[float]; the average mask area normalized by resolution
across all the frames where the object is visible
"masklet_size_abs" : List[float]; the average mask area (in pixels)
across all the frames where the object is visible
"masklet_size_bucket" : List[str]; "small": $1$ <= masklet_size_abs < $32^2$,
"medium": $32^2$ <= masklet_size_abs < $96^2$,
and "large": masklet_size_abs > $96^2$
"masklet_visibility_changes" : List[int]; the number of times where the visibility changes
after the first appearance (e.g., invisible -> visible
or visible -> invisible)
"masklet_first_appeared_frame" : List[int]; the index of the frame where the object appears
the first time in the video. Always 0 for auto masklets.
"masklet_frame_count" : List[int]; the number of frames being annotated. Note that
videos are annotated at 6 fps (annotated every 4 frames)
while the videos are at 24 fps.
"masklet_edited_frame_count" : List[int]; the number of frames being edited by human annotators.
Always 0 for auto masklets.
"masklet_type" : List[str]; "auto" or "manual"
"masklet_stability_score" : Optional[List[List[float]]]; per-mask stability scores. Auto annotation only.
"masklet_num" : int; the number of manual/auto masklets in the video
}
```
Note that SA-V train contains 50,583 videos in total, all of which have manual annotations. Among these, 48,436 videos also have automatic annotations.
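As a quick, hedged sketch of reading one of these annotation files (the video id in the path is hypothetical, and decoding assumes the COCO-style RLE format handled by `pycocotools`, as in the example notebook):
```python
import json

from pycocotools import mask as mask_utils  # assumed RLE codec, as in the visualization notebook

with open("sav_train/sav_000001_manual.json") as f:  # hypothetical video id
    ann = json.load(f)

print(ann["video_id"], "has", ann["masklet_num"], "manual masklets")

# "masklet" is indexed [frame][object]; an entry may be empty when the object
# is not annotated in that frame (an assumption made for this sketch).
first_frame = ann["masklet"][0]
for obj_idx, rle in enumerate(first_frame):
    if not rle:
        continue
    binary_mask = mask_utils.decode(rle)  # H x W uint8 array
    print(f"object {obj_idx}: area = {int(binary_mask.sum())} pixels")
```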
### SA-V val and test
For SA-V val and test sets, we release the extracted frames as jpeg files, and the masks as png files with the following directory structure:
```
sav_val(sav_test)
├── sav_val.txt (sav_test.txt): a list of video ids in the split
├── JPEGImages_24fps # videos are extracted at 24 fps
│ ├── {video_id}
│ │ ├── 00000.jpg # video frame
│ │ ├── 00001.jpg # video frame
│ │ ├── 00002.jpg # video frame
│ │ ├── 00003.jpg # video frame
│ │ └── ...
│ ├── {video_id}
│ ├── {video_id}
│ └── ...
└── Annotations_6fps # videos are annotated at 6 fps
├── {video_id}
│ ├── 000 # obj 000
│ │ ├── 00000.png # mask for object 000 in 00000.jpg
│ │ ├── 00004.png # mask for object 000 in 00004.jpg
│ │ ├── 00008.png # mask for object 000 in 00008.jpg
│ │ ├── 00012.png # mask for object 000 in 00012.jpg
│ │ └── ...
│ ├── 001 # obj 001
│ ├── 002 # obj 002
│ └── ...
├── {video_id}
├── {video_id}
└── ...
```
All masklets in val and test sets are manually annotated in every frame by annotators. For each annotated object in a video, we store the annotated masks in a single png. This is because the annotated objects may overlap, e.g., it is possible in our SA-V dataset for there to be a mask for the whole person as well as a separate mask for their hands.
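As a small sketch of reading one of these per-object masks (the paths below are placeholders that follow the layout above):
```python
import numpy as np
from PIL import Image

# Hypothetical paths following the SA-V val/test directory structure above.
frame = np.array(Image.open("sav_val/JPEGImages_24fps/sav_000001/00000.jpg"))
mask = np.array(Image.open("sav_val/Annotations_6fps/sav_000001/000/00000.png"))

binary = mask > 0  # each per-object png stores a single object's binary mask
print(frame.shape, binary.shape, int(binary.sum()), "foreground pixels")
```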
## SA-V Val and Test Evaluation
We provide an evaluator to compute the common J and F metrics on SA-V val and test sets. To run the evaluation, we need to first install a few dependencies as follows:
```
pip install -r requirements.txt
```
Then we can evaluate the predictions as follows:
```
python sav_evaluator.py --gt_root {GT_ROOT} --pred_root {PRED_ROOT}
```
or run
```
python sav_evaluator.py --help
```
to print a complete help message.
The evaluator expects the `GT_ROOT` to be one of the following folder structures, and `GT_ROOT` and `PRED_ROOT` to have the same structure.
- Same as SA-V val and test directory structure
```
{GT_ROOT} # gt root folder
├── {video_id}
│ ├── 000 # all masks associated with obj 000
│ │ ├── 00000.png # mask for object 000 in frame 00000 (binary mask)
│ │ └── ...
│ ├── 001 # all masks associated with obj 001
│ ├── 002 # all masks associated with obj 002
│ └── ...
├── {video_id}
├── {video_id}
└── ...
```
In the paper for the experiments on SA-V val and test, we run inference on the 24 fps videos, and evaluate on the subset of frames where we have ground truth annotations (first and last annotated frames dropped). The evaluator will ignore the masks in frames where we don't have ground truth annotations.
- Same as [DAVIS](https://github.com/davisvideochallenge/davis2017-evaluation) directory structure
```
{GT_ROOT} # gt root folder
├── {video_id}
│ ├── 00000.png # annotations in frame 00000 (may contain multiple objects)
│ └── ...
├── {video_id}
├── {video_id}
└── ...
```
## License
The evaluation code is licensed under the [BSD 3 license](./LICENSE). Please refer to the paper for more details on the models. The videos and annotations in SA-V Dataset are released under CC BY 4.0.
Third-party code: the evaluation software is heavily adapted from [`VOS-Benchmark`](https://github.com/hkchengrex/vos-benchmark) and [`DAVIS`](https://github.com/davisvideochallenge/davis2017-evaluation) (with their licenses in [`LICENSE_DAVIS`](./LICENSE_DAVIS) and [`LICENSE_VOS_BENCHMARK`](./LICENSE_VOS_BENCHMARK)). | {
"source": "yangchris11/samurai",
"title": "sam2/sav_dataset/README.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/sav_dataset/README.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 8081
} |
## SAM 2 toolkits
This directory provides toolkits for additional SAM 2 use cases.
### Semi-supervised VOS inference
The `vos_inference.py` script can be used to generate predictions for semi-supervised video object segmentation (VOS) evaluation on datasets such as [DAVIS](https://davischallenge.org/index.html), [MOSE](https://henghuiding.github.io/MOSE/) or the SA-V dataset.
After installing SAM 2 and its dependencies, it can be used as follows ([DAVIS 2017 dataset](https://davischallenge.org/davis2017/code.html) as an example). This script saves the prediction PNG files to the `--output_mask_dir`.
```bash
python ./tools/vos_inference.py \
--sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
--sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
--base_video_dir /path-to-davis-2017/JPEGImages/480p \
--input_mask_dir /path-to-davis-2017/Annotations/480p \
--video_list_file /path-to-davis-2017/ImageSets/2017/val.txt \
--output_mask_dir ./outputs/davis_2017_pred_pngs
```
(replace `/path-to-davis-2017` with the path to DAVIS 2017 dataset)
To evaluate on the SA-V dataset with per-object PNG files for the object masks, we need to **add the `--per_obj_png_file` flag** as follows (using SA-V val as an example). This script will also save per-object PNG files for the output masks under the `--per_obj_png_file` flag.
```bash
python ./tools/vos_inference.py \
--sam2_cfg configs/sam2.1/sam2.1_hiera_b+.yaml \
--sam2_checkpoint ./checkpoints/sam2.1_hiera_base_plus.pt \
--base_video_dir /path-to-sav-val/JPEGImages_24fps \
--input_mask_dir /path-to-sav-val/Annotations_6fps \
--video_list_file /path-to-sav-val/sav_val.txt \
--per_obj_png_file \
--output_mask_dir ./outputs/sav_val_pred_pngs
```
(replace `/path-to-sav-val` with the path to SA-V val)
Then, we can use the evaluation tools or servers for each dataset to get the performance of the prediction PNG files above.
Note: by default, the `vos_inference.py` script above assumes that all objects to track already appear on frame 0 in each video (as is the case in DAVIS, MOSE or SA-V). **For VOS datasets that don't have all objects to track appearing in the first frame (such as LVOS or YouTube-VOS), please add the `--track_object_appearing_later_in_video` flag when using `vos_inference.py`**.
### SAMURAI VOS inference
```bash
python ./tools/vos_inference.py \
--sam2_cfg configs/samurai/sam2.1_hiera_l.yaml \
--sam2_checkpoint ./checkpoints/sam2.1_hiera_large.pt \
--base_video_dir /path-to-sav-val-or-sav-test/JPEGImages_24fps/ \
--input_mask_dir /path-to-sav-val-or-sav-test/Annotations_6fps \
    --video_list_file /path-to-sav-val-or-sav-test/sav_val.txt \
    --per_obj_png_file \
    --output_mask_dir /path-to-save-results/ \
    --track_object_appearing_later_in_video
```
(replace `sav_val.txt` with `sav_test.txt` when running on the SA-V test split)
"source": "yangchris11/samurai",
"title": "sam2/tools/README.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/tools/README.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 2808
} |
# Training Code for SAM 2
This folder contains the training code for SAM 2, a foundation model for promptable visual segmentation in images and videos.
The code allows users to train and fine-tune SAM 2 on their own datasets (image, video, or both).
## Structure
The training code is organized into the following subfolders:
* `dataset`: This folder contains image and video dataset and dataloader classes as well as their transforms.
* `model`: This folder contains the main model class (`SAM2Train`) for training/fine-tuning. `SAM2Train` inherits from `SAM2Base` model and provides functions to enable training or fine-tuning SAM 2. It also accepts all training-time parameters used for simulating user prompts (e.g. iterative point sampling).
* `utils`: This folder contains training utils such as loggers and distributed training utils.
* `scripts`: This folder contains the script to extract the frames of SA-V dataset to be used in training.
* `loss_fns.py`: This file has the main loss class (`MultiStepMultiMasksAndIous`) used for training.
* `optimizer.py`: This file contains all optimizer utils that support arbitrary schedulers.
* `trainer.py`: This file contains the `Trainer` class that accepts all the `Hydra` configurable modules (model, optimizer, datasets, etc..) and implements the main train/eval loop.
* `train.py`: This script is used to launch training jobs. It supports single and multi-node jobs. For usage, please check the [Getting Started](README.md#getting-started) section or run `python training/train.py -h`
## Getting Started
To get started with the training code, we provide a simple example to fine-tune our checkpoints on [MOSE](https://henghuiding.github.io/MOSE/) dataset, which can be extended to your custom datasets.
#### Requirements:
- We assume training on A100 GPUs with **80 GB** of memory.
- Download the MOSE dataset using one of the provided links from [here](https://github.com/henghuiding/MOSE-api?tab=readme-ov-file#download).
#### Steps to fine-tune on MOSE:
- Install the packages required for training by running `pip install -e ".[dev]"`.
- Set the paths for MOSE dataset in `configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml`.
```yaml
dataset:
# PATHS to Dataset
img_folder: null # PATH to MOSE JPEGImages folder
gt_folder: null # PATH to MOSE Annotations folder
file_list_txt: null # Optional PATH to filelist containing a subset of videos to be used for training
```
- To fine-tune the base model on MOSE using 8 GPUs, run
```bash
python training/train.py \
-c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \
--use-cluster 0 \
--num-gpus 8
```
We also support multi-node training on a cluster using [SLURM](https://slurm.schedmd.com/documentation.html), for example, you can train on 2 nodes by running
```bash
python training/train.py \
-c configs/sam2.1_training/sam2.1_hiera_b+_MOSE_finetune.yaml \
--use-cluster 1 \
--num-gpus 8 \
  --num-nodes 2 \
--partition $PARTITION \
--qos $QOS \
--account $ACCOUNT
```
where partition, qos, and account are optional and depend on your SLURM configuration.
By default, the checkpoint and logs will be saved under `sam2_logs` directory in the root of the repo. Alternatively, you can set the experiment log directory in the config file as follows:
```yaml
experiment_log_dir: null # Path to log directory, defaults to ./sam2_logs/${config_name}
```
The training losses can be monitored using `tensorboard` logs stored under `tensorboard/` in the experiment log directory. We also provide a sample validation [split]( ../training/assets/MOSE_sample_val_list.txt) for evaluation purposes. To generate predictions, follow this [guide](../tools/README.md) on how to use our `vos_inference.py` script. After generating the predictions, you can run the `sav_evaluator.py` as detailed [here](../sav_dataset/README.md#sa-v-val-and-test-evaluation). The expected MOSE J&F after fine-tuning the Base plus model is 79.4.
After training/fine-tuning, you can then use the new checkpoint (saved in `checkpoints/` in the experiment log directory) similar to SAM 2 released checkpoints (as illustrated [here](../README.md#image-prediction)).
## Training on images and videos
The code supports training on images and videos (similar to how SAM 2 is trained). We provide classes for loading SA-1B as a sample image dataset, SA-V as a sample video dataset, as well as any DAVIS-style video dataset (e.g. MOSE). Note that to train on SA-V, you must first extract all videos to JPEG frames using the provided extraction [script](./scripts/sav_frame_extraction_submitit.py). Below is an example of how to setup the datasets in your config to train on a mix of image and video datasets:
```yaml
data:
train:
_target_: training.dataset.sam2_datasets.TorchTrainMixedDataset
phases_per_epoch: ${phases_per_epoch} # Chunks a single epoch into smaller phases
batch_sizes: # List of batch sizes corresponding to each dataset
- ${bs1} # Batch size of dataset 1
- ${bs2} # Batch size of dataset 2
datasets:
# SA1B as an example of an image dataset
- _target_: training.dataset.vos_dataset.VOSDataset
training: true
video_dataset:
_target_: training.dataset.vos_raw_dataset.SA1BRawDataset
img_folder: ${path_to_img_folder}
gt_folder: ${path_to_gt_folder}
file_list_txt: ${path_to_train_filelist} # Optional
sampler:
_target_: training.dataset.vos_sampler.RandomUniformSampler
num_frames: 1
max_num_objects: ${max_num_objects_per_image}
transforms: ${image_transforms}
# SA-V as an example of a video dataset
- _target_: training.dataset.vos_dataset.VOSDataset
training: true
video_dataset:
_target_: training.dataset.vos_raw_dataset.JSONRawDataset
img_folder: ${path_to_img_folder}
gt_folder: ${path_to_gt_folder}
file_list_txt: ${path_to_train_filelist} # Optional
ann_every: 4
sampler:
_target_: training.dataset.vos_sampler.RandomUniformSampler
num_frames: 8 # Number of frames per video
max_num_objects: ${max_num_objects_per_video}
reverse_time_prob: ${reverse_time_prob} # probability to reverse video
transforms: ${video_transforms}
shuffle: True
num_workers: ${num_train_workers}
pin_memory: True
drop_last: True
collate_fn:
_target_: training.utils.data_utils.collate_fn
_partial_: true
dict_key: all
``` | {
"source": "yangchris11/samurai",
"title": "sam2/training/README.md",
"url": "https://github.com/yangchris11/samurai/blob/master/sam2/training/README.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 6666
} |
# README
## Description for different text files
GOT10K
- got10k_train_full_split.txt: the complete GOT-10K training set. (9335 videos)
- got10k_train_split.txt: part of videos from the GOT-10K training set
- got10k_val_split.txt: another part of videos from the GOT-10K training set
- got10k_vot_exclude.txt: 1k videos that must not be used for training models that are later tested on VOT (as required by the [VOT Challenge](https://www.votchallenge.net/vot2020/participation.html))
- got10k_vot_train_split.txt: part of videos from the "VOT-permitted" GOT-10K training set
- got10k_vot_val_split.txt: another part of videos from the "VOT-permitted" GOT-10K training set
LaSOT
- lasot_train_split.txt: the complete LaSOT training set
TrackingNet
- trackingnet_classmap.txt: the map from sequence name to target class for TrackingNet
"source": "yangchris11/samurai",
"title": "lib/train/data_specs/README.md",
"url": "https://github.com/yangchris11/samurai/blob/master/lib/train/data_specs/README.md",
"date": "2024-11-06T22:46:05",
"stars": 6541,
"description": "Official repository of \"SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory\"",
"file_size": 843
} |
# Awesome LLM Strawberry (OpenAI o1)
[](https://github.com/sindresorhus/awesome)   [](https://github.com/hijkzzz/Awesome-LLM-Strawberry/blob/main/LICENSE)
This is a collection of research papers & blogs for **OpenAI Strawberry (o1) and Reasoning**.
The repository will be continuously updated to track the frontier of LLM reasoning.
## OpenAI Docs
- [https://platform.openai.com/docs/guides/reasoning](https://platform.openai.com/docs/guides/reasoning)
- <img src="https://github.com/user-attachments/assets/b165cb20-9202-4951-8783-6b2f7e0d6071" width="600px">
## News
- [OpenAI] [Introducing deep research](https://openai.com/index/introducing-deep-research/)
- [OpenAI] [o3 preview & o3 mini](https://openai.com/12-days/)
- [OpenAI] [Introducing ChatGPT Pro](https://openai.com/index/introducing-chatgpt-pro/)
- [Google DeepMind] [Gemini 2.0 Flash Thinking](https://x.com/JeffDean/status/1869789813232341267)
- [Ilya Sutskever] [AI with reasoning power will be less predictable](https://www.reuters.com/technology/artificial-intelligence/ai-with-reasoning-power-will-be-less-predictable-ilya-sutskever-says-2024-12-14/)
- [SemiAnalysis] [Scaling Laws – O1 Pro Architecture, Reasoning Training Infrastructure, Orion and Claude 3.5 Opus “Failures”](https://semianalysis.com/2024/12/11/scaling-laws-o1-pro-architecture-reasoning-training-infrastructure-orion-and-claude-3-5-opus-failures/)
- [DeepSeek] [DeepSeek-R1-Lite-Preview is now live: unleashing supercharged reasoning power!](https://api-docs.deepseek.com/news/news1120)
- [Moonshot] [Math benchmarked against the o1 series, search evolves again: Kimi's new reasoning model expands the boundary of intelligence with you](https://mp.weixin.qq.com/s/g4DltigncX-4sfaQ6Qn1zA)
- [Moonshot] [Kimi releases the visual thinking model k1, leading the industry on multiple science tests](https://mp.weixin.qq.com/s/8cip3dehL8OIfZSnbZ1ftQ)
- [InternLM] [The strong reasoning model InternThinker opens for trial: autonomously generates high-intelligence-density data and supports meta-action thinking](https://mp.weixin.qq.com/s/l7fdHlETvhKgZmUl23EiRA)
- [新智元] [A 10,000-word exclusive: the o1 pro architecture revealed for the first time! A surprising twist: did Claude 3.5 Opus not fail after all?](https://mp.weixin.qq.com/s/LozJEE1sAAYAOrEFDVb6mg)
## Blogs
- [OpenAI] [Learning to Reason with LLMs](https://openai.com/index/learning-to-reason-with-llms/)
- [OpenAI] [OpenAI o1-mini Advancing cost-efficient reasoning](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning)
- [OpenAI] [Finding GPT-4’s mistakes with GPT-4](https://openai.com/index/finding-gpt4s-mistakes-with-gpt-4/)
- [ARC-AGI] [OpenAI o3 Breakthrough High Score on ARC-AGI-Pub](https://arcprize.org/blog/oai-o3-pub-breakthrough)
- [Anthropic] [Building effective agents](https://www.anthropic.com/research/building-effective-agents)
- [Tibor Blaho] [Summary of what we have learned during AMA hour with the OpenAI o1 team](https://twitter-thread.com/t/1834686946846597281)
- [hijkzzz] [Exploring OpenAI O1 Model Replication](https://hijkzzz.notion.site/exploring-openai-o1-model-replication?pvs=74)
- [hijkzzz] [A Survey of Reinforcement Learning from Human Feedback (RLHF)](https://hijkzzz.notion.site/a-survey-of-rlhf?pvs=74)
- [Nathan Lambert] [OpenAI’s Strawberry, LM self-talk, inference scaling laws, and spending more on inference](https://www.interconnects.ai/p/openai-strawberry-and-inference-scaling-laws)
- [Nathan Lambert] [Reverse engineering OpenAI’s o1](https://www.interconnects.ai/p/reverse-engineering-openai-o1)
- [Andreas Stuhlmüller, jungofthewon] [Supervise Process, not Outcomes](https://www.alignmentforum.org/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes)
- [Nouha Dziri] [Have o1 Models Cracked Human Reasoning?](https://substack.com/home/post/p-148782195)
- [Rishabh Agarwal] [Improving LLM Reasoning using Self-generated data: RL and Verifiers](https://rosanneliu.com/dlctfs/dlct_240531.pdf)
- [Wei Shen] [Generalization Progress in RLHF: Insights into the Impact of Reward Models and PPO](https://swtheking.notion.site/4e0cbb325aaf458da710f0b36dbb239c?v=c9231e8c988b4d66a1d2dc34df4cf7b5)
- [Dominater069] [Codeforces - Analyzing how good O1-Mini actually is](https://codeforces.com/blog/entry/133887)
- [hijkzzz] [o1 复现以及关于 REINFORCE & GRPO 的碎碎念](https://zhuanlan.zhihu.com/p/14520940716)
## Talks
- [Noam Brown] [Parables on the Power of Planning in AI: From Poker to Diplomacy](https://www.youtube.com/watch?app=desktop&v=eaAonE58sLU)
- [Noam Brown] [OpenAI o1 and Teaching LLMs to Reason Better](https://www.youtube.com/watch?v=jPluSXJpdrA&t=1669s)
- [Hyung Won Chung] [Don't teach. Incentivize.](https://www.youtube.com/watch?v=kYWUEV_e2ss)
## Courses
- [DeepLearning.AI] [Reasoning with o1](https://learn.deeplearning.ai/courses/reasoning-with-o1)
## Twitter
<details open>
<summary>OpenAI Developers</summary>
- [All the questions addressed by the API team during the December 17, 2024 AMA](https://community.openai.com/t/all-the-questions-addressed-by-the-api-team-during-the-december-17-2024-ama/1059780)
- [We’re hosting an AMA for developers from 10–11 AM PT today.](https://x.com/OpenAIDevs/status/1834608585151594537)
- [Today we previewed Reinforcement Fine-Tuning](https://x.com/OpenAI/status/1865136373491208674)
- <img src="https://github.com/user-attachments/assets/8d33afdb-39ec-4c76-ab27-d132ea913c3e" width="360px">
</details>
<details>
<summary>Noam Brown</summary>
- <img src="https://github.com/user-attachments/assets/4670514c-e6fa-474f-abea-c3f6ad01e41a" width="360px">
- <img src="https://github.com/user-attachments/assets/b390ccea-9773-4a96-ba02-40d917473402" width="360px">
- <img src="https://github.com/user-attachments/assets/aa0678fa-28eb-4b2a-a6ff-c0d123568f22" width="360px">
</details>
<details>
<summary>Jason Wei</summary>
- <img src="https://github.com/user-attachments/assets/fcf1ffa7-9e92-4cb6-afd4-ba6c14f3ae32" width="360px">
- <img src="https://github.com/user-attachments/assets/0f84796d-ae11-4c50-a681-e1b33aa152d9" width="360px">
- <img src="https://github.com/user-attachments/assets/fbbf78e4-d34c-4b7b-8163-f8c7288f56a6" width="360px">
</details>
<details>
<summary>Others</summary>
- <img src="https://github.com/user-attachments/assets/9c577f52-1976-4687-a67b-fd52a104a8cc" width="360px">
- <img src="https://github.com/user-attachments/assets/d3fd109b-0c97-4a94-931e-919b3b2f75f4" width="360px">
- <img src="https://github.com/user-attachments/assets/88896f70-017d-4520-ac56-370a023cfe45" width="360px">
</details>
## Open-source
### Models
- [Alibaba Qwen Team] [QwQ](https://huggingface.co/Qwen/QwQ-32B-Preview)
- [Alibaba Qwen Team] [QvQ](https://huggingface.co/Qwen/QVQ-72B-Preview)
- [DeepSeek] [DeepSeek R1](https://huggingface.co/deepseek-ai/DeepSeek-R1)
- [OpenO1 Team] [Open-Source O1](https://opensource-o1.github.io/)
- [NovaSky] [Sky-T1](https://github.com/NovaSky-AI/SkyThought)
- [GAIR-NLP] [O1 Replication Journey: A Strategic Progress Report](https://github.com/GAIR-NLP/O1-Journey)
- [Tencent] [DRT-o1](https://github.com/krystalan/DRT-o1)
- [Skywork] [Skywork o1 Open model series](https://huggingface.co/Skywork/Skywork-o1-Open-Llama-3.1-8B)
- [Alibaba] [Marco-o1](https://github.com/AIDC-AI/Marco-o1)
- [CUHK-SZ] [HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1)
- [Steiner] [A Small Step Towards Reproducing OpenAI o1: Progress Report on the Steiner Open Source Models](https://medium.com/@peakji/a-small-step-towards-reproducing-openai-o1-b9a756a00855)
### Codebase and Others
- [HKUST] [Simple Reinforcement Learning for Reasoning](https://github.com/hkust-nlp/simpleRL-reason)
- This is a replicate of DeepSeek-R1-Zero and DeepSeek-R1 training on small models with limited data
- [StepFun] [Open-Reasoner-Zero](https://github.com/Open-Reasoner-Zero/Open-Reasoner-Zero)
- [OpenRLHF Team] [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF)
- [OpenRLHF Team] [REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models](https://www.researchgate.net/publication/387487679_REINFORCE_A_SIMPLE_AND_EFFICIENT_APPROACH_FOR_ALIGNING_LARGE_LANGUAGE_MODELS) | [Code](https://github.com/OpenRLHF/OpenRLHF/blob/main/examples/scripts/train_reinforce_llama_ray.sh)
- [Berkeley AI Research] [TinyZero](https://github.com/Jiayi-Pan/TinyZero)
- [openreasoner] [OpenR](https://github.com/openreasoner/openr)
- [Maitrix.org] [LLM Reasoners](https://github.com/maitrix-org/llm-reasoners)
- [bklieger-groq] [g1: Using Llama-3.1 70b on Groq to create o1-like reasoning chains](https://github.com/bklieger-groq/g1)
- [o1-chain-of-thought] [Transcription of o1 Reasoning Traces from OpenAI blog post](https://github.com/bradhilton/o1-chain-of-thought)
## Papers
```
format:
- [title](paper link) [links]
- author1, author2, and author3...
- publisher
- code
- experimental environments and datasets
```
### Relevant Papers from OpenAI o1 [contributors](https://openai.com/openai-o1-contributions/)
- [Deliberative alignment: reasoning enables safer language models](https://openai.com/index/deliberative-alignment/)
- OpenAI
- [MLE-bench: Evaluating Machine Learning Agents on Machine Learning Engineering](https://arxiv.org/abs/2410.07095)
- Jun Shern Chan, Neil Chowdhury, Oliver Jaffe, James Aung, Dane Sherburn, Evan Mays, Giulio Starace, Kevin Liu, Leon Maksin, Tejal Patwardhan, Lilian Weng, Aleksander Mądry
- [Training Verifiers to Solve Math Word Problems](https://arxiv.org/abs/2110.14168)
- Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, John Schulman
- [Generative Language Modeling for Automated Theorem Proving](https://arxiv.org/abs/2009.03393)
- Stanislas Polu, Ilya Sutskever
- [Chain-of-Thought Prompting Elicits Reasoning in Large Language Models](https://arxiv.org/abs/2201.11903)
- Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou
- [Self-Consistency Improves Chain of Thought Reasoning in Language Models](https://arxiv.org/abs/2203.11171)
- Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
- [Let's Verify Step by Step](https://arxiv.org/abs/2305.20050)
- Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe
- [LLM Critics Help Catch LLM Bugs](https://arxiv.org/abs/2407.00215)
- Nat McAleese, Rai Michael Pokorny, Juan Felipe Ceron Uribe, Evgenia Nitishinskaya, Maja Trebacz, Jan Leike
- [Self-critiquing models for assisting human evaluators](https://arxiv.org/pdf/2206.05802)
- William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, Jan Leike
- [Scalable Online Planning via Reinforcement Learning Fine-Tuning](https://arxiv.org/abs/2109.15316)
- Arnaud Fickinger, Hengyuan Hu, Brandon Amos, Stuart Russell, Noam Brown.
- [Improving Policies via Search in Cooperative Partially Observable Games](https://arxiv.org/abs/1912.02318)
- Adam Lerer, Hengyuan Hu, Jakob Foerster, Noam Brown.
- [From Medprompt to o1: Exploration of Run-Time Strategies for Medical Challenge Problems and Beyond](https://arxiv.org/abs/2411.03590)
- Scott McKinney
### Technical Report on o1 Models
- [Kimi k1.5: Scaling Reinforcement Learning with LLMs](https://github.com/MoonshotAI/Kimi-k1.5)
- MoonShot
- [DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf)
- DeepSeek AI
### 2025
- [Logic-RL: Unleashing LLM Reasoning with Rule-Based Reinforcement Learning](https://arxiv.org/abs/2502.14768)
- Tian Xie, Zitian Gao, Qingnan Ren, Haoming Luo, Yuqian Hong, Bryan Dai, Joey Zhou, Kai Qiu, Zhirong Wu, Chong Luo
- [Scaling Test-Time Compute Without Verification or RL is Suboptimal](https://arxiv.org/abs/2502.12118)
- Amrith Setlur, Nived Rajaraman, Sergey Levine, Aviral Kumar
- [LLMs Can Easily Learn to Reason from Demonstrations Structure, not content, is what matters!](https://arxiv.org/abs/2502.07374)
- Dacheng Li, Shiyi Cao, Tyler Griggs, Shu Liu, Xiangxi Mo, Shishir G. Patil, Matei Zaharia, Joseph E. Gonzalez, Ion Stoica
- [Demystifying Long Chain-of-Thought Reasoning in LLMs](https://arxiv.org/abs/2502.03373)
- Edward Yeo, Yuxuan Tong, Morry Niu, Graham Neubig, Xiang Yue
- [LIMR: Less is More for RL Scaling](https://arxiv.org/abs/2502.11886)
- Xuefeng Li, Haoyang Zou, Pengfei Liu
- [LIMO: Less is More for Reasoning](https://arxiv.org/abs/2502.03387)
- Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, Pengfei Liu
- [s1: Simple test-time scaling](https://arxiv.org/abs/2501.19393)
- Niklas Muennighoff, Zitong Yang, Weijia Shi, Xiang Lisa Li, Li Fei-Fei, Hannaneh Hajishirzi, Luke Zettlemoyer, Percy Liang, Emmanuel Candès, Tatsunori Hashimoto
- [SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training](https://arxiv.org/abs/2501.17161v1)
- Tianzhe Chu, Yuexiang Zhai, Jihan Yang, Shengbang Tong, Saining Xie, Dale Schuurmans, Quoc V. Le, Sergey Levine, Yi Ma
- [Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling](https://arxiv.org/pdf/2501.11651)
- Zhenyu Hou, Xin Lv, Rui Lu, Jiajie Zhang, Yujiang Li, Zijun Yao, Juanzi Li, Jie Tang, Yuxiao Dong
- [Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search](https://arxiv.org/abs/2502.02508)
- Maohao Shen, Guangtao Zeng, Zhenting Qi, Zhang-Wei Hong, Zhenfang Chen, Wei Lu, Gregory Wornell, Subhro Das, David Cox, Chuang Gan
- [Distillation Quantification for Large Language Models](https://github.com/Aegis1863/LLMs-Distillation-Quantification/blob/main/paper.pdf)
- Sunbowen Lee, Junting Zhou, Chao Ao, etc.
- [rStar-Math: Small LLMs Can Master Math Reasoning with Self-Evolved Deep Thinking](https://arxiv.org/pdf/2501.04519)
- Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, Mao Yang
- [Evolving Deeper LLM Thinking](https://arxiv.org/pdf/2501.09891)
- Kuang-Huei Lee, Ian Fischer, Yueh-Hua Wu, Dave Marwood, Shumeet Baluja, Dale Schuurmans, Xinyun Chen
- [The Lessons of Developing Process Reward Models in Mathematical Reasoning](https://arxiv.org/abs/2501.07301)
- Zhenru Zhang, Chujie Zheng, Yangzhen Wu, Beichen Zhang, Runji Lin, Bowen Yu, Dayiheng Liu, Jingren Zhou, Junyang Lin
- [Towards System 2 Reasoning in LLMs: Learning How to Think With Meta Chain-of-Thought](https://arxiv.org/abs/2501.04682)
- Violet Xiang, Charlie Snell, Kanishk Gandhi, Alon Albalak, Anikait Singh, Chase Blagden, Duy Phung, Rafael Rafailov, Nathan Lile, Dakota Mahan, Louis Castricato, Jan-Philipp Franken, Nick Haber, Chelsea Finn
- [PRMBENCH: A Fine-grained and Challenging Benchmark for Process-Level Reward Models](https://arxiv.org/pdf/2501.03124)
- Mingyang Song, Zhaochen Su, Xiaoye Qu, Jiawei Zhou, Yu Cheng
- [Virgo: A Preliminary Exploration on Reproducing o1-like MLLM](https://arxiv.org/abs/2501.01904)
- Yifan Du, Zikang Liu, Yifan Li, Wayne Xin Zhao, Yuqi Huo, Bingning Wang, Weipeng Chen, Zheng Liu, Zhongyuan Wang, Ji-Rong Wen
- [Imagine while Reasoning in Space: Multimodal Visualization-of-Thought](https://arxiv.org/abs/2501.07542)
- Chengzu Li, Wenshan Wu, Huanyu Zhang, Yan Xia, Shaoguang Mao, Li Dong, Ivan Vulić, Furu Wei
- [LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs](https://arxiv.org/abs/2501.06186)
- Omkar Thawakar, Dinura Dissanayake, Ketan More, Ritesh Thawkar, Ahmed Heakl, Noor Ahsan, Yuhao Li, Mohammed Zumri, Jean Lahoud, Rao Muhammad Anwer, Hisham Cholakkal, Ivan Laptev, Mubarak Shah, Fahad Shahbaz Khan, Salman Khan
### 2024
- [Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning](https://arxiv.org/abs/2405.10292)
- Yuexiang Zhai, Hao Bai, Zipeng Lin, Jiayi Pan, Shengbang Tong, Yifei Zhou, Alane Suhr, Saining Xie, Yann LeCun, Yi Ma, Sergey Levine
- [Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters](https://arxiv.org/abs/2408.03314)
- Charlie Snell, Jaehoon Lee, Kelvin Xu, Aviral Kumar
- [An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models](https://arxiv.org/abs/2408.00724)
- Yangzhen Wu, Zhiqing Sun, Shanda Li, Sean Welleck, Yiming Yang
- [Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling](https://www.arxiv.org/abs/2408.16737)
- Hritik Bansal, Arian Hosseini, Rishabh Agarwal, Vinh Q. Tran, Mehran Kazemi
- [Large Language Monkeys: Scaling Inference Compute with Repeated Sampling](https://arxiv.org/abs/2407.21787)
- Bradley Brown, Jordan Juravsky, Ryan Ehrlich, Ronald Clark, Quoc V. Le, Christopher Ré, Azalia Mirhoseini
- [Imitate, Explore, and Self-Improve: A Reproduction Report on Slow-thinking Reasoning Systems](https://arxiv.org/abs/2412.09413)
- Yingqian Min, Zhipeng Chen, Jinhao Jiang, Jie Chen, Jia Deng, Yiwen Hu, Yiru Tang, Jiapeng Wang, Xiaoxue Cheng, Huatong Song, Wayne Xin Zhao, Zheng Liu, Zhongyuan Wang, Ji-Rong Wen
- [Training Language Models to Self-Correct via Reinforcement Learning](https://arxiv.org/abs/2409.12917)
- Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli, Shariq Iqbal, Colton Bishop, Rebecca Roelofs, Lei M Zhang, Kay McKinney, Disha Shrivastava, Cosmin Paduraru, George Tucker, Doina Precup, Feryal Behbahani, Aleksandra Faust
- [Do NOT Think That Much for 2+3=? On the Overthinking of o1-Like LLMs](https://arxiv.org/abs/2412.21187)
- Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, Dong Yu
- [MEDEC: A Benchmark for Medical Error Detection and Correction in Clinical Notes](https://arxiv.org/abs/2412.19260)
- Asma Ben Abacha, Wen-wai Yim, Yujuan Fu, Zhaoyi Sun, Meliha Yetisgen, Fei Xia, Thomas Lin
- [Qwen2.5-Math Technical Report: Toward Mathematical Expert Model via Self-Improvement](https://arxiv.org/abs/2409.12122)
- An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, Keming Lu, Mingfeng Xue, Runji Lin, Tianyu Liu, Xingzhang Ren, Zhenru Zhang
- [Does RLHF Scale? Exploring the Impacts From Data, Model, and Method](https://arxiv.org/abs/2412.06000)
- Zhenyu Hou, Pengfan Du, Yilin Niu, Zhengxiao Du, Aohan Zeng, Xiao Liu, Minlie Huang, Hongning Wang, Jie Tang, Yuxiao Dong
- [Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering](https://arxiv.org/abs/2411.11504)
- Xinyan Guan, Yanjiang Liu, Xinyu Lu, Boxi Cao, Ben He, Xianpei Han, Le Sun, Jie Lou, Bowen Yu, Yaojie Lu, Hongyu Lin
- [Scaling of Search and Learning: A Roadmap to Reproduce o1 from Reinforcement Learning Perspective](https://arxiv.org/abs/2412.14135)
- Zhiyuan Zeng, Qinyuan Cheng, Zhangyue Yin, Bo Wang, Shimin Li, Yunhua Zhou, Qipeng Guo, Xuanjing Huang, Xipeng Qiu
- [Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking](https://arxiv.org/abs/2403.09629)
- Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, Noah D. Goodman
- https://github.com/ezelikman/quiet-star
- [Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision](https://arxiv.org/abs/2411.16579)
- Zhiheng Xi, Dingwen Yang, Jixuan Huang, Jiafu Tang, Guanyu Li, Yiwen Ding, Wei He, Boyang Hong, Shihan Do, Wenyu Zhan, Xiao Wang, Rui Zheng, Tao Ji, Xiaowei Shi, Yitao Zhai, Rongxiang Weng, Jingang Wang, Xunliang Cai, Tao Gui, Zuxuan Wu, Qi Zhang, Xipeng Qiu, Xuanjing Huang, Yu-Gang Jiang
- https://mathcritique.github.io/
- [On Designing Effective RL Reward at Training Time for LLM Reasoning](https://arxiv.org/abs/2410.15115)
- Jiaxuan Gao, Shusheng Xu, Wenjie Ye, Weilin Liu, Chuyi He, Wei Fu, Zhiyu Mei, Guangju Wang, Yi Wu
- [Generative Verifiers: Reward Modeling as Next-Token Prediction](https://arxiv.org/abs/2408.15240)
- Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, Rishabh Agarwal
- [Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning](https://arxiv.org/abs/2410.08146)
- Amrith Setlur, Chirag Nagpal, Adam Fisch, Xinyang Geng, Jacob Eisenstein, Rishabh Agarwal, Alekh Agarwal, Jonathan Berant, Aviral Kumar
- [Improve Mathematical Reasoning in Language Models by Automated Process Supervision](https://arxiv.org/abs/2406.06592)
- Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, Jiao Sun, Abhinav Rastogi
- [Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations](https://arxiv.org/abs/2312.08935)
- Peiyi Wang, Lei Li, Zhihong Shao, R.X. Xu, Damai Dai, Yifei Li, Deli Chen, Y.Wu, Zhifang Sui
- [Planning In Natural Language Improves LLM Search For Code Generation](https://arxiv.org/abs/2409.03733)
- Evan Wang, Federico Cassano, Catherine Wu, Yunfeng Bai, Will Song, Vaskar Nath, Ziwen Han, Sean Hendryx, Summer Yue, Hugh Zhang
- [PROCESSBENCH: Identifying Process Errors in Mathematical Reasoning](https://arxiv.org/pdf/2412.06559)
- Chujie Zheng, Zhenru Zhang, Beichen Zhang, Runji Lin, Keming Lu, Bowen Yu, Dayiheng Liu, Jingren Zhou, Junyang Lin
- [AFlow: Automating Agentic Workflow Generation](https://arxiv.org/abs/2410.10762)
- Jiayi Zhang, Jinyu Xiang, Zhaoyang Yu, Fengwei Teng, Xionghui Chen, Jiaqi Chen, Mingchen Zhuge, Xin Cheng, Sirui Hong, Jinlin Wang, Bingnan Zheng, Bang Liu, Yuyu Luo, Chenglin Wu
- [Interpretable Contrastive Monte Carlo Tree Search Reasoning](https://arxiv.org/abs/2410.01707)
- Zitian Gao, Boye Niu, Xuzheng He, Haotian Xu, Hongzhang Liu, Aiwei Liu, Xuming Hu, Lijie Wen
- [Agent Q: Advanced Reasoning and Learning for Autonomous AI Agents](https://arxiv.org/abs/2408.07199)
- Pranav Putta, Edmund Mills, Naman Garg, Sumeet Motwani, Chelsea Finn, Divyansh Garg, Rafael Rafailov
- [Mixture-of-Agents Enhances Large Language Model Capabilities](https://arxiv.org/abs/2406.04692)
- Junlin Wang, Jue Wang, Ben Athiwaratkun, Ce Zhang, James Zou
- [Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models](https://arxiv.org/abs/2402.03271)
- Zhiyuan Hu, Chumin Liu, Xidong Feng, Yilun Zhao, See-Kiong Ng, Anh Tuan Luu, Junxian He, Pang Wei Koh, Bryan Hooi
- [Advancing LLM Reasoning Generalists with Preference Trees](https://arxiv.org/abs/2404.02078)
- Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan et al.
- [Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing](https://arxiv.org/abs/2404.12253)
- Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, and Dong Yu.
- [AlphaMath Almost Zero: Process Supervision Without Process](https://arxiv.org/abs/2405.03553)
- Guoxin Chen, Minpeng Liao, Chengxi Li, Kai Fan.
- [ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search](https://arxiv.org/abs/2406.03816)
- Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, and Jie Tang.
- [Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search](https://arxiv.org/abs/2412.18319)
- Huanjin Yao, Jiaxing Huang, Wenhao Wu, Jingyi Zhang, Yibo Wang, Shunyu Liu, Yingjie Wang, Yuxin Song, Haocheng Feng, Li Shen, Dacheng Tao
- [Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models](https://arxiv.org/abs/2411.14432)
- Yuhao Dong, Zuyan Liu, Hai-Long Sun, Jingkang Yang, Winston Hu, Yongming Rao, Ziwei Liu
- [MindStar: Enhancing Math Reasoning in Pre-trained LLMs at Inference Time](https://arxiv.org/abs/2405.16265)
- Jikun Kang, Xin Zhe Li, Xi Chen, Amirreza Kazemi, Qianyi Sun, Boxing Chen, Dong Li, Xu He, Quan He, Feng Wen, Jianye Hao, Jun Yao.
- [Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning](https://arxiv.org/abs/2405.00451)
- Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P. Lillicrap, Kenji Kawaguchi, Michael Shieh.
- [When is Tree Search Useful for LLM Planning? It Depends on the Discriminator](https://arxiv.org/abs/2402.10890)
- Ziru Chen, Michael White, Raymond Mooney, Ali Payani, Yu Su, Huan Sun
- [Chain of Thought Empowers Transformers to Solve Inherently Serial Problems](https://arxiv.org/abs/2402.12875)
- Zhiyuan Li, Hong Liu, Denny Zhou, Tengyu Ma.
- [To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning](https://arxiv.org/abs/2409.12183)
- Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, Greg Durrett
- [Do Large Language Models Latently Perform Multi-Hop Reasoning?](https://arxiv.org/abs/2402.16837)
- Sohee Yang, Elena Gribovskaya, Nora Kassner, Mor Geva, Sebastian Riedel
- [Chain-of-Thought Reasoning Without Prompting](https://arxiv.org/pdf/2402.10200)
- Xuezhi Wang, Denny Zhou
- [Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers](https://arxiv.org/abs/2408.06195)
- Zhenting Qi, Mingyuan Ma, Jiahang Xu, Li Lyna Zhang, Fan Yang, Mao Yang
- [Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs](https://arxiv.org/abs/2406.09136)
- Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, Min Lin
- [ReFT: Reasoning with Reinforced Fine-Tuning](https://arxiv.org/abs/2401.08967)
- Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng Sun, Xiaoran Jin, Hang Li
- [VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment](https://arxiv.org/abs/2410.01679)
- Amirhossein Kazemnejad, Milad Aghajohari, Eva Portelance, Alessandro Sordoni, Siva Reddy, Aaron Courville, Nicolas Le Roux
- [Stream of Search (SoS): Learning to Search in Language](https://arxiv.org/abs/2404.03683)
- Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, Noah D. Goodman
- [GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models](https://arxiv.org/abs/2410.05229)
- Iman Mirzadeh, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, Mehrdad Farajtabar
- [Evaluation of OpenAI o1: Opportunities and Challenges of AGI](https://arxiv.org/abs/2409.18486)
- Tianyang Zhong, Zhengliang Liu, Yi Pan, Yutong Zhang, Yifan Zhou, Shizhe Liang, Zihao Wu, Yanjun Lyu, Peng Shu, Xiaowei Yu, Chao Cao, Hanqi Jiang, Hanxu Chen, Yiwei Li, Junhao Chen, etc.
- [Evaluating LLMs at Detecting Errors in LLM Responses](https://arxiv.org/abs/2404.03602)
- Ryo Kamoi, Sarkar Snigdha Sarathi Das, Renze Lou, Jihyun Janice Ahn, Yilun Zhao, Xiaoxin Lu, Nan Zhang, Yusen Zhang, Ranran Haoran Zhang, Sujeeth Reddy Vummanthala, Salika Dave, Shaobo Qin, Arman Cohan, Wenpeng Yin, Rui Zhang
- [On The Planning Abilities of OpenAI's o1 Models: Feasibility, Optimality, and Generalizability](https://arxiv.org/abs/2409.19924)
- Kevin Wang, Junbo Li, Neel P. Bhatt, Yihan Xi, Qiang Liu, Ufuk Topcu, Zhangyang Wang
- [Not All LLM Reasoners Are Created Equal](https://arxiv.org/abs/2410.01748)
- Arian Hosseini, Alessandro Sordoni, Daniel Toyama, Aaron Courville, Rishabh Agarwal
- [LLMs Still Can't Plan; Can LRMs? A Preliminary Evaluation of OpenAI's o1 on PlanBench](https://arxiv.org/abs/2409.13373)
- Karthik Valmeekam, Kaya Stechly, Subbarao Kambhampati
- [A Comparative Study on Reasoning Patterns of OpenAI's o1 Model](https://arxiv.org/abs/2410.13639)
- Siwei Wu, Zhongyuan Peng, Xinrun Du, Tuney Zheng, Minghao Liu, Jialong Wu, Jiachen Ma, Yizhi Li, Jian Yang, Wangchunshu Zhou, Qunshu Lin, Junbo Zhao, Zhaoxiang Zhang, Wenhao Huang, Ge Zhang, Chenghua Lin, J.H. Liu
- [Thinking LLMs: General Instruction Following with Thought Generation](https://arxiv.org/abs/2410.10630)
- Tianhao Wu, Janice Lan, Weizhe Yuan, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
- [Exploring the Compositional Deficiency of Large Language Models in Mathematical Reasoning Through Trap Problems](https://arxiv.org/pdf/2405.06680)
- Jun Zhao, Jingqi Tong, Yurong Mou, Ming Zhang, Qi Zhang, Xuanjing Huang
- [V-STaR: Training Verifiers for Self-Taught Reasoners](https://arxiv.org/abs/2402.06457)
- Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, Rishabh Agarwal
- [CPL: Critical Plan Step Learning Boosts LLM Generalization in Reasoning Tasks](https://arxiv.org/abs/2409.08642)
- Tianlong Wang, Junzhe Chen, Xueting Han, Jing Bai
- [RLEF: Grounding Code LLMs in Execution Feedback with Reinforcement Learning](https://arxiv.org/abs/2410.02089)
- Tianhao Wu, Janice Lan, Weizhe Yuan, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
- [Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning](https://arxiv.org/abs/2406.14283)
- Chaojie Wang, Yanchen Deng, Zhiyi Lyu, Liang Zeng, Jujie He, Shuicheng Yan, Bo An
### 2023
- [Training Chain-of-Thought via Latent-Variable Inference](https://arxiv.org/pdf/2312.02179)
- Du Phan, Matthew D. Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, Rif A. Saurous
- [Alphazero-like Tree-Search can Guide Large Language Model Decoding and Training](https://arxiv.org/abs/2309.17179)
- Xidong Feng, Ziyu Wan, Muning Wen, Stephen Marcus McAleer, Ying Wen, Weinan Zhang, Jun Wang
- [OVM, Outcome-supervised Value Models for Planning in Mathematical Reasoning](https://arxiv.org/pdf/2311.09724)
- Fei Yu, Anningzhe Gao, Benyou Wang
- [Reasoning with Language Model is Planning with World Model](https://arxiv.org/abs/2305.14992)
- Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu
- [Don’t throw away your value model! Generating more preferable text with Value-Guided Monte-Carlo Tree Search decoding](https://arxiv.org/abs/2309.15028)
  - Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, and Asli Celikyilmaz.
- [Certified reasoning with language models](https://arxiv.org/pdf/2306.04031)
- Gabriel Poesia, Kanishk Gandhi, Eric Zelikman, Noah D. Goodman
- [Large Language Models Cannot Self-Correct Reasoning Yet](https://arxiv.org/abs/2310.01798)
- Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou
### 2022
- [Chain of Thought Imitation with Procedure Cloning](https://arxiv.org/abs/2205.10816)
- Mengjiao Yang, Dale Schuurmans, Pieter Abbeel, Ofir Nachum.
- [STaR: Bootstrapping Reasoning With Reasoning](https://arxiv.org/abs/2203.14465)
- Eric Zelikman, Yuhuai Wu, Jesse Mu, Noah D. Goodman
- [Solving math word problems with process- and outcome-based feedback](https://arxiv.org/abs/2211.14275)
- Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, Irina Higgins
### 2021
- [Scaling Scaling Laws with Board Games](http://arxiv.org/abs/2104.03113)
- Andy L. Jones.
- [Show Your Work: Scratchpads for Intermediate Computation with Language Models](https://arxiv.org/pdf/2112.00114)
- Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, Augustus Odena
### 2017
- [Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm](https://arxiv.org/abs/1712.01815v1)
- David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis. | {
"source": "hijkzzz/Awesome-LLM-Strawberry",
"title": "README.md",
"url": "https://github.com/hijkzzz/Awesome-LLM-Strawberry/blob/main/README.md",
"date": "2024-09-15T04:40:16",
"stars": 6497,
"description": "A collection of LLM papers, blogs, and projects, with a focus on OpenAI o1 🍓 and reasoning techniques.",
"file_size": 31807
} |
# 小智 AI 聊天机器人 (XiaoZhi AI Chatbot)
(中文 | [English](README_en.md) | [日本語](README_ja.md))
这是虾哥的第一个硬件作品。
👉 [ESP32+SenseVoice+Qwen72B打造你的AI聊天伴侣!【bilibili】](https://www.bilibili.com/video/BV11msTenEH3/)
👉 [给小智装上 DeepSeek 的聪明大脑【bilibili】](https://www.bilibili.com/video/BV1GQP6eNEFG/)
👉 [手工打造你的 AI 女友,新手入门教程【bilibili】](https://www.bilibili.com/video/BV1XnmFYLEJN/)
## 项目目的
本项目是一个开源项目,以 MIT 许可证发布,允许任何人免费使用,并可以用于商业用途。
我们希望通过这个项目,能够帮助更多人入门 AI 硬件开发,了解如何将当下飞速发展的大语言模型应用到实际的硬件设备中。无论你是对 AI 感兴趣的学生,还是想要探索新技术的开发者,都可以通过这个项目获得宝贵的学习经验。
欢迎所有人参与到项目的开发和改进中来。如果你有任何想法或建议,请随时提出 Issue 或加入群聊。
学习交流 QQ 群:376893254
## 已实现功能
- Wi-Fi / ML307 Cat.1 4G
- BOOT 键唤醒和打断,支持点击和长按两种触发方式
- 离线语音唤醒 [ESP-SR](https://github.com/espressif/esp-sr)
- 流式语音对话(WebSocket 或 UDP 协议)
- 支持国语、粤语、英语、日语、韩语 5 种语言识别 [SenseVoice](https://github.com/FunAudioLLM/SenseVoice)
- 声纹识别,识别是谁在喊 AI 的名字 [3D Speaker](https://github.com/modelscope/3D-Speaker)
- 大模型 TTS(火山引擎 或 CosyVoice)
- 大模型 LLM(Qwen, DeepSeek, Doubao)
- 可配置的提示词和音色(自定义角色)
- 短期记忆,每轮对话后自我总结
- OLED / LCD 显示屏,显示信号强弱或对话内容
- 支持 LCD 显示图片表情
- 支持多语言(中文、英文)
## 硬件部分
### 面包板手工制作实践
详见飞书文档教程:
👉 [《小智 AI 聊天机器人百科全书》](https://ccnphfhqs21z.feishu.cn/wiki/F5krwD16viZoF0kKkvDcrZNYnhb?from=from_copylink)
面包板效果图如下:

### 已支持的开源硬件
- <a href="https://oshwhub.com/li-chuang-kai-fa-ban/li-chuang-shi-zhan-pai-esp32-s3-kai-fa-ban" target="_blank" title="立创·实战派 ESP32-S3 开发板">立创·实战派 ESP32-S3 开发板</a>
- <a href="https://github.com/espressif/esp-box" target="_blank" title="乐鑫 ESP32-S3-BOX3">乐鑫 ESP32-S3-BOX3</a>
- <a href="https://docs.m5stack.com/zh_CN/core/CoreS3" target="_blank" title="M5Stack CoreS3">M5Stack CoreS3</a>
- <a href="https://docs.m5stack.com/en/atom/Atomic%20Echo%20Base" target="_blank" title="AtomS3R + Echo Base">AtomS3R + Echo Base</a>
- <a href="https://docs.m5stack.com/en/core/ATOM%20Matrix" target="_blank" title="AtomMatrix + Echo Base">AtomMatrix + Echo Base</a>
- <a href="https://gf.bilibili.com/item/detail/1108782064" target="_blank" title="神奇按钮 2.4">神奇按钮 2.4</a>
- <a href="https://www.waveshare.net/shop/ESP32-S3-Touch-AMOLED-1.8.htm" target="_blank" title="微雪电子 ESP32-S3-Touch-AMOLED-1.8">微雪电子 ESP32-S3-Touch-AMOLED-1.8</a>
- <a href="https://github.com/Xinyuan-LilyGO/T-Circle-S3" target="_blank" title="LILYGO T-Circle-S3">LILYGO T-Circle-S3</a>
- <a href="https://oshwhub.com/tenclass01/xmini_c3" target="_blank" title="虾哥 Mini C3">虾哥 Mini C3</a>
- <a href="https://oshwhub.com/movecall/moji-xiaozhi-ai-derivative-editi" target="_blank" title="Movecall Moji ESP32S3">Moji 小智AI衍生版</a>
- <a href="https://github.com/WMnologo/xingzhi-ai" target="_blank" title="无名科技Nologo-星智-1.54">无名科技Nologo-星智-1.54TFT</a>
- <a href="https://github.com/WMnologo/xingzhi-ai" target="_blank" title="无名科技Nologo-星智-0.96">无名科技Nologo-星智-0.96TFT</a>
<div style="display: flex; justify-content: space-between;">
<a href="docs/v1/lichuang-s3.jpg" target="_blank" title="立创·实战派 ESP32-S3 开发板">
<img src="docs/v1/lichuang-s3.jpg" width="240" />
</a>
<a href="docs/v1/espbox3.jpg" target="_blank" title="乐鑫 ESP32-S3-BOX3">
<img src="docs/v1/espbox3.jpg" width="240" />
</a>
<a href="docs/v1/m5cores3.jpg" target="_blank" title="M5Stack CoreS3">
<img src="docs/v1/m5cores3.jpg" width="240" />
</a>
<a href="docs/v1/atoms3r.jpg" target="_blank" title="AtomS3R + Echo Base">
<img src="docs/v1/atoms3r.jpg" width="240" />
</a>
<a href="docs/v1/magiclick.jpg" target="_blank" title="神奇按钮 2.4">
<img src="docs/v1/magiclick.jpg" width="240" />
</a>
<a href="docs/v1/waveshare.jpg" target="_blank" title="微雪电子 ESP32-S3-Touch-AMOLED-1.8">
<img src="docs/v1/waveshare.jpg" width="240" />
</a>
<a href="docs/lilygo-t-circle-s3.jpg" target="_blank" title="LILYGO T-Circle-S3">
<img src="docs/lilygo-t-circle-s3.jpg" width="240" />
</a>
<a href="docs/xmini-c3.jpg" target="_blank" title="虾哥 Mini C3">
<img src="docs/xmini-c3.jpg" width="240" />
</a>
<a href="docs/v1/movecall-moji-esp32s3.jpg" target="_blank" title="Movecall Moji 小智AI衍生版">
<img src="docs/v1/movecall-moji-esp32s3.jpg" width="240" />
</a>
<a href="docs/v1/wmnologo_xingzhi_1.54.jpg" target="_blank" title="无名科技Nologo-星智-1.54">
<img src="docs/v1/wmnologo_xingzhi_1.54.jpg" width="240" />
</a>
<a href="docs/v1/wmnologo_xingzhi_0.96.jpg" target="_blank" title="无名科技Nologo-星智-0.96">
<img src="docs/v1/wmnologo_xingzhi_0.96.jpg" width="240" />
</a>
</div>
## 固件部分
### 免开发环境烧录
新手第一次操作建议先不要搭建开发环境,直接使用免开发环境烧录的固件。
固件默认接入 [xiaozhi.me](https://xiaozhi.me) 官方服务器,目前个人用户注册账号可以免费使用 Qwen 实时模型。
👉 [Flash烧录固件(无IDF开发环境)](https://ccnphfhqs21z.feishu.cn/wiki/Zpz4wXBtdimBrLk25WdcXzxcnNS)
### 开发环境
- Cursor 或 VSCode
- 安装 ESP-IDF 插件,选择 SDK 版本 5.3 或以上
- Linux 比 Windows 更好,编译速度快,也免去驱动问题的困扰
- 使用 Google C++ 代码风格,提交代码时请确保符合规范
## 智能体配置
如果你已经拥有一个小智 AI 聊天机器人设备,可以登录 [xiaozhi.me](https://xiaozhi.me) 控制台进行配置。
👉 [后台操作视频教程(旧版界面)](https://www.bilibili.com/video/BV1jUCUY2EKM/)
## 技术原理与私有化部署
👉 [一份详细的 WebSocket 通信协议文档](docs/websocket.md)
在个人电脑上部署服务器,可以参考另一位作者同样以 MIT 许可证开源的项目 [xiaozhi-esp32-server](https://github.com/xinnan-tech/xiaozhi-esp32-server)
## Star History
<a href="https://star-history.com/#78/xiaozhi-esp32&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date" />
</picture>
</a> | {
"source": "78/xiaozhi-esp32",
"title": "README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 5587
} |
# XiaoZhi AI Chatbot
([中文](README.md) | English | [日本語](README_ja.md))
This is Terrence's first hardware project.
👉 [Build your AI chat companion with ESP32+SenseVoice+Qwen72B!【bilibili】](https://www.bilibili.com/video/BV11msTenEH3/)
👉 [Equipping XiaoZhi with DeepSeek's smart brain【bilibili】](https://www.bilibili.com/video/BV1GQP6eNEFG/)
👉 [Build your own AI companion, a beginner's guide【bilibili】](https://www.bilibili.com/video/BV1XnmFYLEJN/)
## Project Purpose
This is an open-source project released under the MIT license, allowing anyone to use it freely, including for commercial purposes.
Through this project, we aim to help more people get started with AI hardware development and understand how to implement rapidly evolving large language models in actual hardware devices. Whether you're a student interested in AI or a developer exploring new technologies, this project offers valuable learning experiences.
Everyone is welcome to participate in the project's development and improvement. If you have any ideas or suggestions, please feel free to raise an Issue or join the chat group.
Learning & Discussion QQ Group: 376893254
## Implemented Features
- Wi-Fi / ML307 Cat.1 4G
- BOOT button wake-up and interruption, supporting both click and long-press triggers
- Offline voice wake-up [ESP-SR](https://github.com/espressif/esp-sr)
- Streaming voice dialogue (WebSocket or UDP protocol)
- Support for 5 languages: Mandarin, Cantonese, English, Japanese, Korean [SenseVoice](https://github.com/FunAudioLLM/SenseVoice)
- Voiceprint recognition to identify who is calling the AI's name [3D Speaker](https://github.com/modelscope/3D-Speaker)
- Large model TTS (Volcano Engine or CosyVoice)
- Large Language Models (Qwen, DeepSeek, Doubao)
- Configurable prompts and voice tones (custom characters)
- Short-term memory, self-summarizing after each conversation round
- OLED / LCD display showing signal strength or conversation content
- Support for LCD image expressions
- Multi-language support (Chinese, English)
## Hardware Section
### Breadboard DIY Practice
See the Feishu document tutorial:
👉 [XiaoZhi AI Chatbot Encyclopedia](https://ccnphfhqs21z.feishu.cn/wiki/F5krwD16viZoF0kKkvDcrZNYnhb?from=from_copylink)
Breadboard demonstration:

### Supported Open Source Hardware
- <a href="https://oshwhub.com/li-chuang-kai-fa-ban/li-chuang-shi-zhan-pai-esp32-s3-kai-fa-ban" target="_blank" title="LiChuang ESP32-S3 Development Board">LiChuang ESP32-S3 Development Board</a>
- <a href="https://github.com/espressif/esp-box" target="_blank" title="Espressif ESP32-S3-BOX3">Espressif ESP32-S3-BOX3</a>
- <a href="https://docs.m5stack.com/zh_CN/core/CoreS3" target="_blank" title="M5Stack CoreS3">M5Stack CoreS3</a>
- <a href="https://docs.m5stack.com/en/atom/Atomic%20Echo%20Base" target="_blank" title="AtomS3R + Echo Base">AtomS3R + Echo Base</a>
- <a href="https://docs.m5stack.com/en/core/ATOM%20Matrix" target="_blank" title="AtomMatrix + Echo Base">AtomMatrix + Echo Base</a>
- <a href="https://gf.bilibili.com/item/detail/1108782064" target="_blank" title="Magic Button 2.4">Magic Button 2.4</a>
- <a href="https://www.waveshare.net/shop/ESP32-S3-Touch-AMOLED-1.8.htm" target="_blank" title="Waveshare ESP32-S3-Touch-AMOLED-1.8">Waveshare ESP32-S3-Touch-AMOLED-1.8</a>
- <a href="https://github.com/Xinyuan-LilyGO/T-Circle-S3" target="_blank" title="LILYGO T-Circle-S3">LILYGO T-Circle-S3</a>
- <a href="https://oshwhub.com/tenclass01/xmini_c3" target="_blank" title="XiaGe Mini C3">XiaGe Mini C3</a>
- <a href="https://oshwhub.com/movecall/moji-xiaozhi-ai-derivative-editi" target="_blank" title="Movecall Moji ESP32S3">Moji XiaoZhi AI Derivative Version</a>
<div style="display: flex; justify-content: space-between;">
<a href="docs/v1/lichuang-s3.jpg" target="_blank" title="LiChuang ESP32-S3 Development Board">
<img src="docs/v1/lichuang-s3.jpg" width="240" />
</a>
<a href="docs/v1/espbox3.jpg" target="_blank" title="Espressif ESP32-S3-BOX3">
<img src="docs/v1/espbox3.jpg" width="240" />
</a>
<a href="docs/v1/m5cores3.jpg" target="_blank" title="M5Stack CoreS3">
<img src="docs/v1/m5cores3.jpg" width="240" />
</a>
<a href="docs/v1/atoms3r.jpg" target="_blank" title="AtomS3R + Echo Base">
<img src="docs/v1/atoms3r.jpg" width="240" />
</a>
<a href="docs/AtomMatrix-echo-base.jpg" target="_blank" title="AtomMatrix-echo-base + Echo Base">
<img src="docs/AtomMatrix-echo-base.jpg" width="240" />
</a>
<a href="docs/v1/magiclick.jpg" target="_blank" title="MagiClick 2.4">
<img src="docs/v1/magiclick.jpg" width="240" />
</a>
<a href="docs/v1/waveshare.jpg" target="_blank" title="Waveshare ESP32-S3-Touch-AMOLED-1.8">
<img src="docs/v1/waveshare.jpg" width="240" />
</a>
<a href="docs/lilygo-t-circle-s3.jpg" target="_blank" title="LILYGO T-Circle-S3">
<img src="docs/lilygo-t-circle-s3.jpg" width="240" />
</a>
<a href="docs/xmini-c3.jpg" target="_blank" title="Xmini C3">
<img src="docs/xmini-c3.jpg" width="240" />
</a>
<a href="docs/v1/movecall-moji-esp32s3.jpg" target="_blank" title="Moji">
<img src="docs/v1/movecall-moji-esp32s3.jpg" width="240" />
</a>
</div>
## Firmware Section
### Flashing Without Development Environment
For beginners, it's recommended to first use the firmware that can be flashed without setting up a development environment.
The firmware connects to the official [xiaozhi.me](https://xiaozhi.me) server by default. Currently, personal users can register an account to use the Qwen real-time model for free.
👉 [Flash Firmware Guide (No IDF Environment)](https://ccnphfhqs21z.feishu.cn/wiki/Zpz4wXBtdimBrLk25WdcXzxcnNS)
### Development Environment
- Cursor or VSCode
- Install ESP-IDF plugin, select SDK version 5.3 or above
- Linux is preferred over Windows for faster compilation and fewer driver issues
- Use Google C++ code style, ensure compliance when submitting code
## AI Agent Configuration
If you already have a XiaoZhi AI chatbot device, you can configure it through the [xiaozhi.me](https://xiaozhi.me) console.
👉 [Backend Operation Tutorial (Old Interface)](https://www.bilibili.com/video/BV1jUCUY2EKM/)
## Technical Principles and Private Deployment
👉 [Detailed WebSocket Communication Protocol Documentation](docs/websocket.md)
For server deployment on personal computers, refer to another MIT-licensed project [xiaozhi-esp32-server](https://github.com/xinnan-tech/xiaozhi-esp32-server)
## Star History
<a href="https://star-history.com/#78/xiaozhi-esp32&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date" />
</picture>
</a> | {
"source": "78/xiaozhi-esp32",
"title": "README_en.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/README_en.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 6983
} |
# シャオジー AI チャットボット
([中文](README.md) | [English](README_en.md) | 日本語)
これは シャーガー(Terrence)の最初のハードウェア作品です。
👉 [ESP32+SenseVoice+Qwen72Bで AI チャット仲間を作ろう!【bilibili】](https://www.bilibili.com/video/BV11msTenEH3/)
👉 [シャオジーに DeepSeek のスマートな頭脳を搭載【bilibili】](https://www.bilibili.com/video/BV1GQP6eNEFG/)
👉 [自分だけの AI パートナーを作る、初心者向けガイド【bilibili】](https://www.bilibili.com/video/BV1XnmFYLEJN/)
## プロジェクトの目的
このプロジェクトは MIT ライセンスの下で公開されているオープンソースプロジェクトで、商用利用を含め、誰でも自由に使用することができます。
このプロジェクトを通じて、より多くの人々が AI ハードウェア開発を始め、急速に進化している大規模言語モデルを実際のハードウェアデバイスに実装する方法を理解できるようになることを目指しています。AI に興味のある学生でも、新しい技術を探求する開発者でも、このプロジェクトから貴重な学習経験を得ることができます。
プロジェクトの開発と改善には誰でも参加できます。アイデアや提案がありましたら、Issue を立てるかチャットグループにご参加ください。
学習・交流 QQ グループ:376893254
## 実装済みの機能
- Wi-Fi / ML307 Cat.1 4G
- BOOT ボタンによる起動と中断、クリックと長押しの2種類のトリガーに対応
- オフライン音声起動 [ESP-SR](https://github.com/espressif/esp-sr)
- ストリーミング音声対話(WebSocket または UDP プロトコル)
- 5言語対応:標準中国語、広東語、英語、日本語、韓国語 [SenseVoice](https://github.com/FunAudioLLM/SenseVoice)
- 話者認識、AI の名前を呼んでいる人を識別 [3D Speaker](https://github.com/modelscope/3D-Speaker)
- 大規模モデル TTS(Volcano Engine または CosyVoice)
- 大規模言語モデル(Qwen, DeepSeek, Doubao)
- 設定可能なプロンプトと音声トーン(カスタムキャラクター)
- 短期記憶、各会話ラウンド後の自己要約
- OLED / LCD ディスプレイ、信号強度や会話内容を表示
- LCD での画像表情表示に対応
- 多言語対応(中国語、英語)
## ハードウェア部分
### ブレッドボード DIY 実践
Feishu ドキュメントチュートリアルをご覧ください:
👉 [シャオジー AI チャットボット百科事典](https://ccnphfhqs21z.feishu.cn/wiki/F5krwD16viZoF0kKkvDcrZNYnhb?from=from_copylink)
ブレッドボードのデモ:

### サポートされているオープンソースハードウェア
- <a href="https://oshwhub.com/li-chuang-kai-fa-ban/li-chuang-shi-zhan-pai-esp32-s3-kai-fa-ban" target="_blank" title="LiChuang ESP32-S3 開発ボード">LiChuang ESP32-S3 開発ボード</a>
- <a href="https://github.com/espressif/esp-box" target="_blank" title="Espressif ESP32-S3-BOX3">Espressif ESP32-S3-BOX3</a>
- <a href="https://docs.m5stack.com/zh_CN/core/CoreS3" target="_blank" title="M5Stack CoreS3">M5Stack CoreS3</a>
- <a href="https://docs.m5stack.com/en/atom/Atomic%20Echo%20Base" target="_blank" title="AtomS3R + Echo Base">AtomS3R + Echo Base</a>
- <a href="https://docs.m5stack.com/en/core/ATOM%20Matrix" target="_blank" title="AtomMatrix + Echo Base">AtomMatrix + Echo Base</a>
- <a href="https://gf.bilibili.com/item/detail/1108782064" target="_blank" title="マジックボタン 2.4">マジックボタン 2.4</a>
- <a href="https://www.waveshare.net/shop/ESP32-S3-Touch-AMOLED-1.8.htm" target="_blank" title="Waveshare ESP32-S3-Touch-AMOLED-1.8">Waveshare ESP32-S3-Touch-AMOLED-1.8</a>
- <a href="https://github.com/Xinyuan-LilyGO/T-Circle-S3" target="_blank" title="LILYGO T-Circle-S3">LILYGO T-Circle-S3</a>
- <a href="https://oshwhub.com/tenclass01/xmini_c3" target="_blank" title="XiaGe Mini C3">XiaGe Mini C3</a>
- <a href="https://oshwhub.com/movecall/moji-xiaozhi-ai-derivative-editi" target="_blank" title="Movecall Moji ESP32S3">Moji シャオジー AI 派生版</a>
<div style="display: flex; justify-content: space-between;">
<a href="docs/v1/lichuang-s3.jpg" target="_blank" title="LiChuang ESP32-S3 開発ボード">
<img src="docs/v1/lichuang-s3.jpg" width="240" />
</a>
<a href="docs/v1/espbox3.jpg" target="_blank" title="Espressif ESP32-S3-BOX3">
<img src="docs/v1/espbox3.jpg" width="240" />
</a>
<a href="docs/v1/m5cores3.jpg" target="_blank" title="M5Stack CoreS3">
<img src="docs/v1/m5cores3.jpg" width="240" />
</a>
<a href="docs/v1/atoms3r.jpg" target="_blank" title="AtomS3R + Echo Base">
<img src="docs/v1/atoms3r.jpg" width="240" />
</a>
<a href="docs/v1/magiclick.jpg" target="_blank" title="MagiClick 2.4">
<img src="docs/v1/magiclick.jpg" width="240" />
</a>
<a href="docs/v1/waveshare.jpg" target="_blank" title="Waveshare ESP32-S3-Touch-AMOLED-1.8">
<img src="docs/v1/waveshare.jpg" width="240" />
</a>
<a href="docs/lilygo-t-circle-s3.jpg" target="_blank" title="LILYGO T-Circle-S3">
<img src="docs/lilygo-t-circle-s3.jpg" width="240" />
</a>
<a href="docs/xmini-c3.jpg" target="_blank" title="Xmini C3">
<img src="docs/xmini-c3.jpg" width="240" />
</a>
<a href="docs/v1/movecall-moji-esp32s3.jpg" target="_blank" title="Moji">
<img src="docs/v1/movecall-moji-esp32s3.jpg" width="240" />
</a>
</div>
## ファームウェア部分
### 開発環境なしのフラッシュ
初心者の方は、まず開発環境のセットアップなしでフラッシュできるファームウェアを使用することをお勧めします。
ファームウェアはデフォルトで公式 [xiaozhi.me](https://xiaozhi.me) サーバーに接続します。現在、個人ユーザーはアカウントを登録することで、Qwen リアルタイムモデルを無料で使用できます。
👉 [フラッシュファームウェアガイド(IDF環境なし)](https://ccnphfhqs21z.feishu.cn/wiki/Zpz4wXBtdimBrLk25WdcXzxcnNS)
### 開発環境
- Cursor または VSCode
- ESP-IDF プラグインをインストール、SDK バージョン 5.3 以上を選択
- Linux は Windows より好ましい(コンパイルが速く、ドライバーの問題も少ない)
- Google C++ コードスタイルを使用、コード提出時にはコンプライアンスを確認
## AI エージェント設定
シャオジー AI チャットボットデバイスをお持ちの場合は、[xiaozhi.me](https://xiaozhi.me) コンソールで設定できます。
👉 [バックエンド操作チュートリアル(旧インターフェース)](https://www.bilibili.com/video/BV1jUCUY2EKM/)
## 技術原理とプライベートデプロイメント
👉 [詳細な WebSocket 通信プロトコルドキュメント](docs/websocket.md)
個人のコンピュータでのサーバーデプロイメントについては、同じく MIT ライセンスで公開されている別のプロジェクト [xiaozhi-esp32-server](https://github.com/xinnan-tech/xiaozhi-esp32-server) を参照してください。
## スター履歴
<a href="https://star-history.com/#78/xiaozhi-esp32&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=78/xiaozhi-esp32&type=Date" />
</picture>
</a> | {
"source": "78/xiaozhi-esp32",
"title": "README_ja.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/README_ja.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 5480
} |
以下是一份基于代码实现整理的 WebSocket 通信协议文档,概述客户端(设备)与服务器之间如何通过 WebSocket 进行交互。该文档仅基于所提供的代码推断,实际部署时可能需要结合服务器端实现进行进一步确认或补充。
---
## 1. 总体流程概览
1. **设备端初始化**
- 设备上电、初始化 `Application`:
- 初始化音频编解码器、显示屏、LED 等
- 连接网络
- 创建并初始化实现 `Protocol` 接口的 WebSocket 协议实例(`WebsocketProtocol`)
- 进入主循环等待事件(音频输入、音频输出、调度任务等)。
2. **建立 WebSocket 连接**
- 当设备需要开始语音会话时(例如用户唤醒、手动按键触发等),调用 `OpenAudioChannel()`:
- 根据编译配置获取 WebSocket URL(`CONFIG_WEBSOCKET_URL`)
- 设置若干请求头(`Authorization`, `Protocol-Version`, `Device-Id`, `Client-Id`)
- 调用 `Connect()` 与服务器建立 WebSocket 连接
3. **发送客户端 “hello” 消息**
- 连接成功后,设备会发送一条 JSON 消息,示例结构如下:
```json
{
"type": "hello",
"version": 1,
"transport": "websocket",
"audio_params": {
"format": "opus",
"sample_rate": 16000,
"channels": 1,
"frame_duration": 60
}
}
```
- 其中 `"frame_duration"` 的值对应 `OPUS_FRAME_DURATION_MS`(例如 60ms)。
4. **服务器回复 “hello”**
- 设备等待服务器返回一条包含 `"type": "hello"` 的 JSON 消息,并检查 `"transport": "websocket"` 是否匹配。
- 如果匹配,则认为服务器已就绪,标记音频通道打开成功。
- 如果在超时时间(默认 10 秒)内未收到正确回复,认为连接失败并触发网络错误回调。
5. **后续消息交互**
- 设备端和服务器端之间可发送两种主要类型的数据:
1. **二进制音频数据**(Opus 编码)
2. **文本 JSON 消息**(用于传输聊天状态、TTS/STT 事件、IoT 命令等)
- 在代码里,接收回调主要分为:
- `OnData(...)`:
- 当 `binary` 为 `true` 时,认为是音频帧;设备会将其当作 Opus 数据进行解码。
- 当 `binary` 为 `false` 时,认为是 JSON 文本,需要在设备端用 cJSON 进行解析并做相应业务逻辑处理(见下文消息结构)。
- 当服务器或网络出现断连,回调 `OnDisconnected()` 被触发:
- 设备会调用 `on_audio_channel_closed_()`,并最终回到空闲状态。
6. **关闭 WebSocket 连接**
- 设备在需要结束语音会话时,会调用 `CloseAudioChannel()` 主动断开连接,并回到空闲状态。
- 或者如果服务器端主动断开,也会引发同样的回调流程。
---
## 2. 通用请求头
在建立 WebSocket 连接时,代码示例中设置了以下请求头:
- `Authorization`: 用于存放访问令牌,形如 `"Bearer <token>"`
- `Protocol-Version`: 固定示例中为 `"1"`
- `Device-Id`: 设备物理网卡 MAC 地址
- `Client-Id`: 设备 UUID(可在应用中唯一标识设备)
这些头会随着 WebSocket 握手一起发送到服务器,服务器可根据需求进行校验、认证等。
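下面是一个建立连接并附带上述请求头的最小示意(非仓库自带代码;假设使用 Python 的 `websockets` 库,URL 与令牌均为占位):

```python
import websockets  # pip install websockets

async def open_channel(url: str, token: str, device_id: str, client_id: str):
    headers = {
        "Authorization": f"Bearer {token}",  # 访问令牌
        "Protocol-Version": "1",
        "Device-Id": device_id,              # 设备 MAC 地址
        "Client-Id": client_id,              # 设备 UUID
    }
    # websockets 14 之前该参数名为 extra_headers,之后改为 additional_headers
    return await websockets.connect(url, extra_headers=headers)
```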
---
## 3. JSON 消息结构
WebSocket 文本帧以 JSON 方式传输,以下为常见的 `"type"` 字段及其对应业务逻辑。若消息里包含未列出的字段,可能为可选或特定实现细节。
### 3.1 客户端→服务器
1. **Hello**
- 连接成功后,由客户端发送,告知服务器基本参数。
- 例:
```json
{
"type": "hello",
"version": 1,
"transport": "websocket",
"audio_params": {
"format": "opus",
"sample_rate": 16000,
"channels": 1,
"frame_duration": 60
}
}
```
2. **Listen**
- 表示客户端开始或停止录音监听。
- 常见字段:
- `"session_id"`:会话标识
- `"type": "listen"`
- `"state"`:`"start"`, `"stop"`, `"detect"`(唤醒检测已触发)
- `"mode"`:`"auto"`, `"manual"` 或 `"realtime"`,表示识别模式。
- 例:开始监听
```json
{
"session_id": "xxx",
"type": "listen",
"state": "start",
"mode": "manual"
}
```
3. **Abort**
- 终止当前说话(TTS 播放)或语音通道。
- 例:
```json
{
"session_id": "xxx",
"type": "abort",
"reason": "wake_word_detected"
}
```
- `reason` 值可为 `"wake_word_detected"` 或其他。
4. **Wake Word Detected**
- 用于客户端向服务器告知检测到唤醒词。
- 例:
```json
{
"session_id": "xxx",
"type": "listen",
"state": "detect",
"text": "你好小明"
}
```
5. **IoT**
- 发送当前设备的物联网相关信息:
- **Descriptors**(描述设备功能、属性等)
- **States**(设备状态的实时更新)
- 例:
```json
{
"session_id": "xxx",
"type": "iot",
"descriptors": { ... }
}
```
或
```json
{
"session_id": "xxx",
"type": "iot",
"states": { ... }
}
```
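以下辅助函数按上文格式拼装几类客户端消息,仅作示意(具体字段以服务端实际约定为准):

```python
import json

def hello_message(frame_duration_ms: int = 60) -> str:
    return json.dumps({
        "type": "hello", "version": 1, "transport": "websocket",
        "audio_params": {"format": "opus", "sample_rate": 16000,
                         "channels": 1, "frame_duration": frame_duration_ms},
    })

def listen_message(session_id: str, state: str, mode: str = "manual") -> str:
    # state 取 "start" / "stop";若为 "detect" 则应改用 text 字段携带唤醒词
    return json.dumps({"session_id": session_id, "type": "listen",
                       "state": state, "mode": mode})

def abort_message(session_id: str, reason: str = "wake_word_detected") -> str:
    return json.dumps({"session_id": session_id, "type": "abort", "reason": reason})
```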
---
### 3.2 服务器→客户端
1. **Hello**
- 服务器端返回的握手确认消息。
- 必须包含 `"type": "hello"` 和 `"transport": "websocket"`。
- 可能会带有 `audio_params`,表示服务器期望的音频参数,或与客户端对齐的配置。
- 成功接收后客户端会设置事件标志,表示 WebSocket 通道就绪。
2. **STT**
- `{"type": "stt", "text": "..."}`
- 表示服务器端识别到了用户语音。(例如语音转文本结果)
- 设备可能将此文本显示到屏幕上,后续再进入回答等流程。
3. **LLM**
- `{"type": "llm", "emotion": "happy", "text": "😀"}`
- 服务器指示设备调整表情动画 / UI 表达。
4. **TTS**
- `{"type": "tts", "state": "start"}`:服务器准备下发 TTS 音频,客户端进入 “speaking” 播放状态。
- `{"type": "tts", "state": "stop"}`:表示本次 TTS 结束。
- `{"type": "tts", "state": "sentence_start", "text": "..."}`
- 让设备在界面上显示当前要播放或朗读的文本片段(例如用于显示给用户)。
5. **IoT**
- `{"type": "iot", "commands": [ ... ]}`
- 服务器向设备发送物联网的动作指令,设备解析并执行(如打开灯、设置温度等)。
6. **音频数据:二进制帧**
- 当服务器发送音频二进制帧(Opus 编码)时,客户端解码并播放。
   - 若客户端正处于 "listening"(录音)状态,收到的音频帧会被忽略或清空以防冲突。
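收到文本帧后,客户端可按 `type` 字段分发处理;下面是一个简化的分发示意(省略具体业务逻辑,函数名为假设):

```python
import json

def handle_text_frame(data: str) -> None:
    msg = json.loads(data)
    msg_type = msg.get("type")
    if msg_type is None:
        print(f"Missing message type, data: {data}")  # 对应代码中的错误日志
    elif msg_type == "stt":
        print("识别结果:", msg.get("text"))
    elif msg_type == "tts":
        state = msg.get("state")  # start / sentence_start / stop
        print("TTS 状态:", state, msg.get("text", ""))
    elif msg_type == "llm":
        print("表情:", msg.get("emotion"))
    elif msg_type == "iot":
        pass                      # 交给 thing_manager 执行 commands
```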
---
## 4. 音频编解码
1. **客户端发送录音数据**
- 音频输入经过可能的回声消除、降噪或音量增益后,通过 Opus 编码打包为二进制帧发送给服务器。
- 如果客户端每次编码生成的二进制帧大小为 N 字节,则会通过 WebSocket 的 **binary** 消息发送这块数据。
2. **客户端播放收到的音频**
- 收到服务器的二进制帧时,同样认定是 Opus 数据。
- 设备端会进行解码,然后交由音频输出接口播放。
- 如果服务器的音频采样率与设备不一致,会在解码后再进行重采样。
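按上述参数(16000 Hz、单声道、60 ms 帧长),每帧对应的 PCM 采样点数可按下式计算;Opus 编码后的字节数取决于码率,并不固定:

```python
SAMPLE_RATE = 16000        # Hz
FRAME_DURATION_MS = 60     # 对应 OPUS_FRAME_DURATION_MS
CHANNELS = 1

samples_per_frame = SAMPLE_RATE * FRAME_DURATION_MS // 1000  # 960 个采样点
pcm_bytes_per_frame = samples_per_frame * CHANNELS * 2       # 16-bit PCM,即 1920 字节
print(samples_per_frame, pcm_bytes_per_frame)
```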
---
## 5. 常见状态流转
以下简述设备端关键状态流转,与 WebSocket 消息对应:
1. **Idle** → **Connecting**
- 用户触发或唤醒后,设备调用 `OpenAudioChannel()` → 建立 WebSocket 连接 → 发送 `"type":"hello"`。
2. **Connecting** → **Listening**
- 成功建立连接后,若继续执行 `SendStartListening(...)`,则进入录音状态。此时设备会持续编码麦克风数据并发送到服务器。
3. **Listening** → **Speaking**
- 收到服务器 TTS Start 消息 (`{"type":"tts","state":"start"}`) → 停止录音并播放接收到的音频。
4. **Speaking** → **Idle**
- 服务器 TTS Stop (`{"type":"tts","state":"stop"}`) → 音频播放结束。若未继续进入自动监听,则返回 Idle;如果配置了自动循环,则再度进入 Listening。
5. **Listening** / **Speaking** → **Idle**(遇到异常或主动中断)
- 调用 `SendAbortSpeaking(...)` 或 `CloseAudioChannel()` → 中断会话 → 关闭 WebSocket → 状态回到 Idle。
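上述流转可以抽象为一个很小的状态机,示意如下(状态与事件名仅为说明用途,并非固件中的实际枚举):

```python
from enum import Enum, auto

class DeviceState(Enum):
    IDLE = auto()
    CONNECTING = auto()
    LISTENING = auto()
    SPEAKING = auto()

# (当前状态, 事件) -> 下一状态
TRANSITIONS = {
    (DeviceState.IDLE,       "wake"):      DeviceState.CONNECTING,
    (DeviceState.CONNECTING, "hello_ok"):  DeviceState.LISTENING,
    (DeviceState.LISTENING,  "tts_start"): DeviceState.SPEAKING,
    (DeviceState.SPEAKING,   "tts_stop"):  DeviceState.IDLE,
    (DeviceState.LISTENING,  "abort"):     DeviceState.IDLE,
    (DeviceState.SPEAKING,   "abort"):     DeviceState.IDLE,
}

def next_state(state: DeviceState, event: str) -> DeviceState:
    return TRANSITIONS.get((state, event), state)
```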
---
## 6. 错误处理
1. **连接失败**
- 如果 `Connect(url)` 返回失败或在等待服务器 “hello” 消息时超时,触发 `on_network_error_()` 回调。设备会提示“无法连接到服务”或类似错误信息。
2. **服务器断开**
- 如果 WebSocket 异常断开,回调 `OnDisconnected()`:
- 设备回调 `on_audio_channel_closed_()`
- 切换到 Idle 或其他重试逻辑。
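若需要自动重连,常见做法是指数退避后重试;下面是一个与具体协议无关的示意(退避参数为假设值):

```python
import asyncio

async def connect_with_retry(open_channel, max_attempts: int = 5):
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            return await open_channel()
        except (OSError, asyncio.TimeoutError) as err:
            print(f"连接失败(第 {attempt} 次): {err}")
            await asyncio.sleep(delay)
            delay = min(delay * 2, 30.0)  # 指数退避,上限 30 秒
    raise ConnectionError("无法连接到服务")
```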
---
## 7. 其它注意事项
1. **鉴权**
- 设备通过设置 `Authorization: Bearer <token>` 提供鉴权,服务器端需验证是否有效。
- 如果令牌过期或无效,服务器可拒绝握手或在后续断开。
2. **会话控制**
   - 代码中部分消息包含 `session_id`,用于区分独立的对话或操作。服务端可根据需要对不同会话做分离处理;WebSocket 协议本身不约束其取值,示例中该字段可能为空字符串。
3. **音频负载**
- 代码里默认使用 Opus 格式,并设置 `sample_rate = 16000`,单声道。帧时长由 `OPUS_FRAME_DURATION_MS` 控制,一般为 60ms。可根据带宽或性能做适当调整。
4. **IoT 指令**
- `"type":"iot"` 的消息用户端代码对接 `thing_manager` 执行具体命令,因设备定制而不同。服务器端需确保下发格式与客户端保持一致。
5. **错误或异常 JSON**
- 当 JSON 中缺少必要字段,例如 `{"type": ...}`,客户端会记录错误日志(`ESP_LOGE(TAG, "Missing message type, data: %s", data);`),不会执行任何业务。
---
## 8. 消息示例
下面给出一个典型的双向消息示例(流程简化示意):
1. **客户端 → 服务器**(握手)
```json
{
"type": "hello",
"version": 1,
"transport": "websocket",
"audio_params": {
"format": "opus",
"sample_rate": 16000,
"channels": 1,
"frame_duration": 60
}
}
```
2. **服务器 → 客户端**(握手应答)
```json
{
"type": "hello",
"transport": "websocket",
"audio_params": {
"sample_rate": 16000
}
}
```
3. **客户端 → 服务器**(开始监听)
```json
{
"session_id": "",
"type": "listen",
"state": "start",
"mode": "auto"
}
```
同时客户端开始发送二进制帧(Opus 数据)。
4. **服务器 → 客户端**(ASR 结果)
```json
{
"type": "stt",
"text": "用户说的话"
}
```
5. **服务器 → 客户端**(TTS开始)
```json
{
"type": "tts",
"state": "start"
}
```
接着服务器发送二进制音频帧给客户端播放。
6. **服务器 → 客户端**(TTS结束)
```json
{
"type": "tts",
"state": "stop"
}
```
客户端停止播放音频,若无更多指令,则回到空闲状态。
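把上述交互串起来,一个最小的测试客户端大致如下(仅为示意:URL、令牌均为占位,省略了上行音频帧的采集与编码):

```python
import asyncio, json
import websockets

async def main():
    headers = {"Authorization": "Bearer <token>", "Protocol-Version": "1",
               "Device-Id": "aa:bb:cc:dd:ee:ff", "Client-Id": "<uuid>"}
    # websockets 14 之前为 extra_headers,之后为 additional_headers
    async with websockets.connect("wss://example.com/ws", extra_headers=headers) as ws:
        await ws.send(json.dumps({
            "type": "hello", "version": 1, "transport": "websocket",
            "audio_params": {"format": "opus", "sample_rate": 16000,
                             "channels": 1, "frame_duration": 60}}))
        server_hello = json.loads(await ws.recv())      # 等待服务器 hello
        assert server_hello.get("transport") == "websocket"
        await ws.send(json.dumps({"session_id": "", "type": "listen",
                                  "state": "start", "mode": "auto"}))
        while True:
            frame = await ws.recv()
            if isinstance(frame, bytes):
                pass                                    # Opus 音频帧,交给解码器播放
            else:
                print(frame)                            # JSON 文本消息(stt / tts / iot ...)

asyncio.run(main())
```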
---
## 9. 总结
本协议通过在 WebSocket 上层传输 JSON 文本与二进制音频帧,完成功能包括音频流上传、TTS 音频播放、语音识别与状态管理、IoT 指令下发等。其核心特征:
- **握手阶段**:发送 `"type":"hello"`,等待服务器返回。
- **音频通道**:采用 Opus 编码的二进制帧双向传输语音流。
- **JSON 消息**:使用 `"type"` 为核心字段标识不同业务逻辑,包括 TTS、STT、IoT、WakeWord 等。
- **扩展性**:可根据实际需求在 JSON 消息中添加字段,或在 headers 里进行额外鉴权。
服务器与客户端需提前约定各类消息的字段含义、时序逻辑以及错误处理规则,方能保证通信顺畅。上述信息可作为基础文档,便于后续对接、开发或扩展。 | {
"source": "78/xiaozhi-esp32",
"title": "docs/websocket.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/docs/websocket.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 7593
} |
# 编译配置命令
**配置编译目标为 ESP32:**
```bash
idf.py set-target esp32
```
**打开 menuconfig:**
```bash
idf.py menuconfig
```
**选择板子:**
```
Xiaozhi Assistant -> Board Type -> AtomMatrix + Echo Base
```
**修改 flash 大小:**
```
Serial flasher config -> Flash size -> 4 MB
```
**修改分区表:**
```
Partition Table -> Custom partition CSV file -> partitions_4M.csv
```
**编译:**
```bash
idf.py build
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/atommatrix-echo-base/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/atommatrix-echo-base/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 387
} |
# 编译配置命令
**配置编译目标为 ESP32S3:**
```bash
idf.py set-target esp32s3
```
**打开 menuconfig:**
```bash
idf.py menuconfig
```
**选择板子:**
```
Xiaozhi Assistant -> Board Type -> AtomS3 + Echo Base
```
**关闭语音唤醒:**
```
Xiaozhi Assistant -> [ ] 启用语音唤醒与音频处理 -> Unselect
```
**修改 flash 大小:**
```
Serial flasher config -> Flash size -> 8 MB
```
**修改分区表:**
```
Partition Table -> Custom partition CSV file -> partitions_8M.csv
```
**关闭片外 PSRAM:**
```
Component config -> ESP PSRAM -> [ ] Support for external, SPI-connected RAM -> Unselect
```
**编译:**
```bash
idf.py build
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/atoms3-echo-base/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/atoms3-echo-base/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 573
} |
# 编译配置命令
**配置编译目标为 ESP32S3:**
```bash
idf.py set-target esp32s3
```
**打开 menuconfig:**
```bash
idf.py menuconfig
```
**选择板子:**
```
Xiaozhi Assistant -> Board Type -> AtomS3R + Echo Base
```
**修改 flash 大小:**
```
Serial flasher config -> Flash size -> 8 MB
```
**修改分区表:**
```
Partition Table -> Custom partition CSV file -> partitions_8M.csv
```
**修改 psram 配置:**
```
Component config -> ESP PSRAM -> SPI RAM config -> Mode (QUAD/OCT) -> Octal Mode PSRAM
```
**编译:**
```bash
idf.py build
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/atoms3r-echo-base/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/atoms3r-echo-base/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 544
} |
# 编译配置命令
**配置编译目标为 ESP32:**
```bash
idf.py set-target esp32
```
**打开 menuconfig:**
```bash
idf.py menuconfig
```
**选择板子:**
```
Xiaozhi Assistant -> Board Type -> 面包板 ESP32 DevKit
```
**修改 flash 大小:**
```
Serial flasher config -> Flash size -> 4 MB
```
**修改分区表:**
```
Partition Table -> Custom partition CSV file -> partitions_4M.csv
```
**编译:**
```bash
idf.py build
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/bread-compact-esp32/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/bread-compact-esp32/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 381
} |
## 立创·实战派ESP32-C3开发板
1、开发板资料:https://wiki.lckfb.com/zh-hans/szpi-esp32c3
2、该开发板 flash 大小为 8MB,编译时注意选择合适的分区表:
```
Partition Table --->
Partition Table (Custom partition table CSV) --->
(partitions_8M.csv) Custom partition CSV file
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/lichuang-c3-dev/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/lichuang-c3-dev/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 242
} |
# Build Configuration Commands
**Set the build target to ESP32S3:**
```bash
idf.py set-target esp32s3
```
**Open menuconfig:**
```bash
idf.py menuconfig
```
**Select the board:**
```
Xiaozhi Assistant -> Board Type -> M5Stack CoreS3
```
**Change the PSRAM configuration:**
```
Component config -> ESP PSRAM -> SPI RAM config -> Mode (QUAD/OCT) -> Quad Mode PSRAM
```
**Build:**
```bash
idf.py build
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/m5stack-core-s3/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/m5stack-core-s3/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 368
} |
# Build Configuration Commands
**Set the build target to ESP32S3:**
```bash
idf.py set-target esp32s3
```
**Open menuconfig:**
```bash
idf.py menuconfig
```
**Select the board:**
```
Xiaozhi Assistant -> Board Type -> Movecall Moji 小智AI衍生版
```
**Build:**
```bash
idf.py build
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/movecall-moji-esp32s3/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/movecall-moji-esp32s3/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 258
} |
# Build Configuration Commands
**Set the build target to ESP32S3:**
```bash
idf.py set-target esp32s3
```
**Open menuconfig:**
```bash
idf.py menuconfig
```
**Select the board:**
```
Xiaozhi Assistant -> Board Type -> 太极小派esp32s3
```
**Build:**
```bash
idf.py build
``` | {
"source": "78/xiaozhi-esp32",
"title": "main/boards/taiji-pi-s3/README.md",
"url": "https://github.com/78/xiaozhi-esp32/blob/main/main/boards/taiji-pi-s3/README.md",
"date": "2024-08-31T10:08:16",
"stars": 6444,
"description": "Build your own AI friend",
"file_size": 222
} |

# Cofounder | Early alpha release
* project - [cofounder.openinterface.ai](https://cofounder.openinterface.ai)
* 👋 [@n_raidenai](https://x.com/n_raidenai)
**cofounder**
- full stack generative web apps ; backend + db + stateful web apps
- gen ui rooted in app architecture, with ai-guided mockup designer & modular design systems
---
# Early Alpha : Unstable ⚠️
The following points are heavily emphasized:
- This is an **EARLY, UNSTABLE, PREVIEW RELEASE** of the project. ⚠️ Until v1 is released, it is expected to break often.
- **It consumes a lot of tokens**. If you are on a tokens budget, wait until v1 is released.
- Again, this is an early, unstable release. A first test run. An early preview of the project's ideas. Far from completion. Open-source iterative development. Work in progress. Unstable early alpha release. [etc]
### **If any of these might be issues for you, even in the slightest way, wait until v1 is released ! Do not try the current release !**
To help you decide whether or not to try the current release, here is a guide:
| Situation | Recommendation |
|------------------------------------------------------------------------------------------------------|------------------------|
| I'm not sure if this tool release is mature yet, maybe it will not work as intended and I may spend millions of tokens for nothing | Do not use it yet |
| I am very excited about this tool, I hope it is perfectly production-ready, because if it's not, I will make commentary about `I spent X amount on OpenAI API calls` | Do not use it yet |
| I am not interested in code. I want to type words into a box and have my project completed; I do not want messy broken unfinished code | Do not use it yet |
| I love exploring experimental tools but I am on the fence. It's going to break halfway and leave me sad | Do not use it yet |
| Who should even try it at this point? | Nobody. Do not use it yet |
| But I really want to use it for some esoteric reason having read all the above. | Do not use it yet either |
---
https://github.com/user-attachments/assets/cfd09250-d21e-49fc-a29b-fa0c661abfc0
https://github.com/user-attachments/assets/c055f9c4-6bc0-4b11-ba8f-cc9f149387fa
---
## Important
**Early alpha release; earlier than expected by a few weeks**
Still not merged with the key target features of the project, notably:
- project iteration modules for all dimensions of generated projects
- admin interface for event streams and (deeper) project iterations
- integrate the full genUI plugin :
* generative design systems
* deploy finetuned models & serve from api.cofounder
- local, browser-based dev env for the entire project scope
- add { react-native , flutter , other web frameworks }
- validations & swarm code review and autofix
- code optimization
- [...]
be patient :)
---
# Usage
## Install & Init
* Open your terminal and run
```sh
npx @openinterface/cofounder
```
Follow the instructions. The installer:
- will ask you for your keys
- will set up dirs & start installs
- will start the local `cofounder/api` builder and server
- will open the web dashboard where you can create new projects (at `http://localhost:4200`) 🎉
```
note :
you will be asked for a cofounder.openinterface.ai key
it is recommended to use one as it enables the designer/layoutv1 and swarm/external-apis features
and can be used without limits during the current early alpha period
the full index will be available for local download on v1 release
```
- the whole project currently uses `node v22`.
```sh
# alternatively, you can make a new project without going through the dashboard
# by running:
npx @openinterface/cofounder -p "YourAppProjectName" -d "describe your app here" -a "(optional) design instructions"
```
## Run Generated Apps
- Your backend & vite+react web app will be generated incrementally inside `./apps/{YourApp}`
Open your terminal in `./apps/{YourApp}` and run
```sh
npm i && npm run dev
```
It will start both the backend and vite+react, concurrently, after installing their dependencies
Go to `http://localhost:5173/` to open the web app 🎉
- From within the generated apps, you can use ⌘+K / Ctrl+K to iterate on UI components
[more details later]
## Notes
### Dashboard & Local API
If you resume later and would like to iterate on your generated apps,
the local `./cofounder/api` server needs to be running to receive queries
You can (re)start the `local cofounder API` running the following command from `./cofounder/api`
```sh
npm run start
```
The dashboard will open in `http://localhost:4200`
- note: You can also generate new apps from the same env, without the dashboard, by running one of these commands from `./cofounder/api`:
```sh
npm run start -- -p "ProjectName" -f "some app description" -a "minimalist and spacious , light theme"
npm run start -- -p "ProjectName" -f "./example_description.txt" -a "minimalist and spacious , light theme"
```
### Concurrency
**[the architecture will be further detailed and documented later]**
Every "node" in the `cofounder` architecture has a defined configuration under `./cofounder/api/system/structure/nodes/{category}/{name}.yaml` to handle things like concurrency, retries and limits per time interval
For example, if you want multiple LLM generations to run in parallel (when possible - sequences and parallel steps are defined in DAGs under `./cofounder/api/system/structure/sequences/{definition}.yaml`), go to
```yaml
#./cofounder/api/system/structure/nodes/op/llm.yaml
nodes:
op:LLM::GEN:
desc: "..."
in: [model, messages, preparser, parser, query, stream]
out: [generated, usage]
queue:
concurrency: 1 # <------------------------------- here
op:LLM::VECTORIZE:
desc: "{texts} -> {vectors}"
in: [texts]
out: [vectors, usage]
mapreduce: true
op:LLM::VECTORIZE:CHUNK:
desc: "{texts} -> {vectors}"
in: [texts]
out: [vectors, usage]
queue:
concurrency: 50
```
and change the `concurrency` parameter of `op:LLM::GEN` to a higher value.
The default LLM concurrency is set to `2` so you can see what's happening in your console streams step by step - but you can increase it depending on your API key limits.
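As a conceptual aside, the effect of a per-node `concurrency` value can be pictured as a semaphore around the node's work: at most that many generations run at once, and the rest wait in the queue. The sketch below is Python and purely illustrative; cofounder's actual queue lives in its Node.js API, and all names here are made up.
```python
# Conceptual sketch of a concurrency-limited node queue (names are hypothetical;
# cofounder's real implementation is in its Node.js API, not this code).
import asyncio
import random

CONCURRENCY = 2  # analogous to queue.concurrency in op/llm.yaml

async def llm_gen(task_id: int, sem: asyncio.Semaphore) -> None:
    async with sem:  # at most CONCURRENCY tasks run inside this block at once
        print(f"task {task_id}: generation started")
        await asyncio.sleep(random.uniform(0.5, 1.5))  # stand-in for an LLM call
        print(f"task {task_id}: generation finished")

async def main() -> None:
    sem = asyncio.Semaphore(CONCURRENCY)
    await asyncio.gather(*(llm_gen(i, sem) for i in range(6)))

asyncio.run(main())
```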
---
# Changelog
---
# Roadmap
---
# Benchmarks
---
# Community & Links
- [](https://discord.gg/2kVMzeASj9) | Community discord server by @flamecoders
---
# Docs, Design Systems, ...
**[WIP]**
---
# Architecture
[more details later]
archi/v1 is as follows :

---
# Credits
- Demo design systems built using Figma renders / UI kits from:
* blocks.pm by Hexa Plugin (see `cofounder/api/system/presets`)
* google material
* figma core
* shadcn
- Dashboard node-based ui powered by [react flow](https://reactflow.dev/) | {
"source": "raidendotai/cofounder",
"title": "README.md",
"url": "https://github.com/raidendotai/cofounder/blob/main/README.md",
"date": "2024-09-19T00:27:55",
"stars": 6271,
"description": "ai-generated apps , full stack + generative UI",
"file_size": 7284
} |
A non-ordered roadmap & todo dump
will update with proper map later, ignore for now
---
## nearest
merge with browser-based local dev env using webcontainers ; console.cofounder.openinterface.ai
## validation, errorfix
post-generation validation swarm modules
swarm autofix modules, merge
babel parse
## build, deploy
vite plugin to generate web app without the cofounder modules
generate packed projects ready for deployment
automate deployments, integrate different services
## design, layouts
plug in advanced designer + models
document how to custom design systems
add & index docs for shiny design systems
fonts / css modules
release extensive cofounder index on api access for layout designer to use
separate {desktop,mobile} in designer
RAG on icons via {text/clip} (like in openv0), maybe as a vite plugin
## functional
deploy latest index checkpoint in api.cofounder
## mgmt
more iteration modules, sequences to handle full lifecycle of generated projects
document everything
## platforms
react-native / flutter
more frontend frameworks
## project
technical articles
model train & serve from api
admin panel à la coolify
## also
SEO stuff
analytics into the dev feedback
animation (ie. framer-motion)
functional, api, python support api-side
benchmarks | {
"source": "raidendotai/cofounder",
"title": "TODO.md",
"url": "https://github.com/raidendotai/cofounder/blob/main/TODO.md",
"date": "2024-09-19T00:27:55",
"stars": 6271,
"description": "ai-generated apps , full stack + generative UI",
"file_size": 1271
} |
## How to start apps
Your backend & vite+react web app will be generated incrementally inside `./apps/{YourApp}`
Open your terminal in `./apps/{YourApp}` and run
```sh
npm i && npm run dev
``` | {
"source": "raidendotai/cofounder",
"title": "apps/README.md",
"url": "https://github.com/raidendotai/cofounder/blob/main/apps/README.md",
"date": "2024-09-19T00:27:55",
"stars": 6271,
"description": "ai-generated apps , full stack + generative UI",
"file_size": 190
} |
App generated from cofounder/boilerplate.
Instructions on how to start the API and frontend, whether in parallel or separately, go here. | {
"source": "raidendotai/cofounder",
"title": "cofounder/boilerplate/README.md",
"url": "https://github.com/raidendotai/cofounder/blob/main/cofounder/boilerplate/README.md",
"date": "2024-09-19T00:27:55",
"stars": 6271,
"description": "ai-generated apps , full stack + generative UI",
"file_size": 128
} |
# React + TypeScript + Vite
This template provides a minimal setup to get React working in Vite with HMR and some ESLint rules.
Currently, two official plugins are available:
- [@vitejs/plugin-react](https://github.com/vitejs/vite-plugin-react/blob/main/packages/plugin-react/README.md) uses [Babel](https://babeljs.io/) for Fast Refresh
- [@vitejs/plugin-react-swc](https://github.com/vitejs/vite-plugin-react-swc) uses [SWC](https://swc.rs/) for Fast Refresh
## Expanding the ESLint configuration
If you are developing a production application, we recommend updating the configuration to enable type aware lint rules:
- Configure the top-level `parserOptions` property like this:
```js
export default tseslint.config({
languageOptions: {
// other options...
parserOptions: {
project: ["./tsconfig.node.json", "./tsconfig.app.json"],
tsconfigRootDir: import.meta.dirname,
},
},
});
```
- Replace `tseslint.configs.recommended` with `tseslint.configs.recommendedTypeChecked` or `tseslint.configs.strictTypeChecked`
- Optionally add `...tseslint.configs.stylisticTypeChecked`
- Install [eslint-plugin-react](https://github.com/jsx-eslint/eslint-plugin-react) and update the config:
```js
// eslint.config.js
import react from "eslint-plugin-react";
export default tseslint.config({
// Set the react version
settings: { react: { version: "18.3" } },
plugins: {
// Add the react plugin
react,
},
rules: {
// other rules...
// Enable its recommended rules
...react.configs.recommended.rules,
...react.configs["jsx-runtime"].rules,
},
});
``` | {
"source": "raidendotai/cofounder",
"title": "cofounder/boilerplate/vitereact-boilerplate/README.md",
"url": "https://github.com/raidendotai/cofounder/blob/main/cofounder/boilerplate/vitereact-boilerplate/README.md",
"date": "2024-09-19T00:27:55",
"stars": 6271,
"description": "ai-generated apps , full stack + generative UI",
"file_size": 1577
} |
## Notes
This is a demo design system and will be replaced on official post-alpha release
## Credits
Renders dumped from Figma Presets:
- Blocks.pm by Hexa Plugin
- Google Material & Figma Core Design systems | {
"source": "raidendotai/cofounder",
"title": "cofounder/api/system/presets/ui/design/systems/protoboy-v1/README.md",
"url": "https://github.com/raidendotai/cofounder/blob/main/cofounder/api/system/presets/ui/design/systems/protoboy-v1/README.md",
"date": "2024-09-19T00:27:55",
"stars": 6271,
"description": "ai-generated apps , full stack + generative UI",
"file_size": 212
} |
<div align="center">
<img src="app-icon.png" alt="NeoHtop Logo" width="120" />
<h1>NeoHtop</h1>
<p>A modern, cross-platform system monitor built on top of Svelte, Rust, and Tauri.</p>
[](https://github.com/Abdenasser/neohtop/blob/main/LICENSE)
[](https://github.com/Abdenasser/neohtop/stargazers)
[](https://github.com/Abdenasser/neohtop/issues)
[](https://github.com/Abdenasser/neohtop/releases)
[](https://developer.apple.com/documentation/security/notarizing-macos-software-before-distribution)
</div>
<div align="center">
<picture>
<!-- <source media="(prefers-color-scheme: dark)" srcset="screenshot.png">
<source media="(prefers-color-scheme: light)" srcset="screenshot-light.png"> -->
<img alt="NeoHtop Screenshot" src="./screenshot.png" width="800">
</picture>
</div>
<div align="center">
<p>If you find this project helpful, consider buying me a coffee:</p>
<a href="https://www.buymeacoffee.com/abdenasser" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
<p>Or sponsor me on GitHub:</p>
<a href="https://github.com/sponsors/Abdenasser" target="_blank"><img src="https://img.shields.io/badge/Sponsor-abdenasser-white?style=flat&logo=github&logoColor=pink" alt="Sponsor @abdenasser" style="height: auto !important;width: 217px !important;"></a>
</div>
## Table of Contents
- [Why NeoHtop?](#why-neohtop)
- [Features](#features)
- [Tech Stack](#tech-stack)
- [Getting Started](#getting-started)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Running with Sudo](#running-with-sudo)
- [Development](#development)
- [Setup](#setup)
- [Code Formatting](#code-formatting)
- [Pull Requests](#pull-requests)
- [Contributing](#contributing)
- [License](#license)
## Why NeoHtop?
[Read about the back story and motivation behind NeoHtop](https://www.abdenasser.com/2024/11/06/oh-boy-neohtop/)
## Features
- 🚀 Real-time process monitoring
- 💻 CPU and Memory usage tracking
- 🎨 Beautiful, modern UI with dark/light themes
- 🔍 Advanced process search and filtering
- 📌 Pin important processes
- 🛠 Process management (kill processes)
- 🎯 Sort by any column
- 🔄 Auto-refresh system stats
### Search Functionality
Search for processes by name, command, or PID. Use commas to search for multiple terms simultaneously. Regular expressions are supported for advanced filtering.
Examples:
- `arm, x86`: Returns processes with "arm" or "x86" in the name or command
- `d$`: Lists daemons (processes ending with 'd')
- `^(\w+\.)+\w+$`: Shows processes with reverse domain name notation (e.g., com.docker.vmnetd)
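To make the matching rules above concrete, here is a small illustrative Python sketch of the described semantics: comma-separated terms are OR-ed together, and each term may match as a plain substring or as a regular expression. NeoHtop's real filter is implemented in its Rust/Svelte code, and details such as case handling here are assumptions.
```python
# Illustrative only: NeoHtop's real filter lives in its Rust/Svelte code.
import re

def matches(query: str, process_name: str) -> bool:
    """Return True if any comma-separated term matches the process name.

    Each term is tried both as a case-insensitive substring and as a regex,
    mirroring the examples above (exact case handling is an assumption).
    """
    for term in (t.strip() for t in query.split(",") if t.strip()):
        if term.lower() in process_name.lower():
            return True
        try:
            if re.search(term, process_name):
                return True
        except re.error:
            pass  # not a valid regex; the substring check already failed
    return False

processes = ["launchd", "com.docker.vmnetd", "node", "arm64e_helper"]
print([p for p in processes if matches("arm, x86", p)])        # multiple terms (OR)
print([p for p in processes if matches(r"d$", p)])             # daemons
print([p for p in processes if matches(r"^(\w+\.)+\w+$", p)])  # reverse-DNS names
```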
## Tech Stack
- **Frontend**: SvelteKit, TypeScript
- **Backend**: Rust, Tauri
- **Styling**: CSS Variables for theming
- **Icons**: FontAwesome
## Getting Started
### Prerequisites
- Node.js (v16 or later)
- Rust (latest stable)
- Xcode Command Line Tools (for macOS)
### Installation
Download the latest release from the [releases page](https://github.com/Abdenasser/neohtop/releases).
### Running with Sudo
Some processes require monitoring with sudo privileges. To monitor these processes, launch NeoHtop with sudo:
- macOS: `sudo /Applications/NeoHtop.app/Contents/MacOS/NeoHtop`
- Linux: `pkexec /path/to/neohtop` (recommended)
## Development
### Setup
```bash
# Install dependencies
npm install
# Run in development mode
npm run tauri dev
# Build for production
npm run tauri build
```
### Code Formatting
We use Prettier for web code and `cargo fmt` for Rust code.
```bash
# Format all files
npm run format
# Check formatting without making changes
npm run format:check
```
### Pull Requests
Before submitting a PR, ensure:
1. All code is formatted (`npm run format`)
2. The format check passes (`npm run format:check`)
3. Your commits follow the project's commit message conventions
## Contributing
We welcome contributions! Please see our [contributing guidelines](./.github/CONTRIBUTING.md) for more information.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. | {
"source": "Abdenasser/neohtop",
"title": "README.md",
"url": "https://github.com/Abdenasser/neohtop/blob/main/README.md",
"date": "2024-10-30T20:44:11",
"stars": 5821,
"description": "💪🏻 Blazing-fast system monitoring for your desktop (built with Rust, Tauri & Svelte)",
"file_size": 4566
} |
# Contributing to NeoHtop
Thank you for considering contributing to NeoHtop! We welcome contributions from the community.
## How to Contribute
1. Fork the repository.
2. Create a new branch (`git checkout -b feature/YourFeature`).
3. Make your changes.
4. Commit your changes (`git commit -m 'Add some feature'`).
5. Push to the branch (`git push origin feature/YourFeature`).
6. Open a pull request.
## Code of Conduct
Please note that this project is released with a [Contributor Code of Conduct](https://www.contributor-covenant.org/version/2/0/code_of_conduct/). By participating in this project you agree to abide by its terms.
## Reporting Bugs
Please use the [bug report template](./ISSUE_TEMPLATE/bug_report.md) to report any bugs you find.
## Requesting Features
Please use the [feature request template](./ISSUE_TEMPLATE/feature_request.md) to suggest new features. | {
"source": "Abdenasser/neohtop",
"title": ".github/CONTRIBUTING.md",
"url": "https://github.com/Abdenasser/neohtop/blob/main/.github/CONTRIBUTING.md",
"date": "2024-10-30T20:44:11",
"stars": 5821,
"description": "💪🏻 Blazing-fast system monitoring for your desktop (built with Rust, Tauri & Svelte)",
"file_size": 884
} |
## Description
Please include a summary of the changes and the related issue. Please also include relevant motivation and context.
Fixes # (issue)
## Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
## How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
- [ ] Test A
- [ ] Test B
## Checklist:
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules | {
"source": "Abdenasser/neohtop",
"title": ".github/pull_request_template.md",
"url": "https://github.com/Abdenasser/neohtop/blob/main/.github/pull_request_template.md",
"date": "2024-10-30T20:44:11",
"stars": 5821,
"description": "💪🏻 Blazing-fast system monitoring for your desktop (built with Rust, Tauri & Svelte)",
"file_size": 1243
} |
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. macOS]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here. | {
"source": "Abdenasser/neohtop",
"title": ".github/ISSUE_TEMPLATE/bug_report.md",
"url": "https://github.com/Abdenasser/neohtop/blob/main/.github/ISSUE_TEMPLATE/bug_report.md",
"date": "2024-10-30T20:44:11",
"stars": 5821,
"description": "💪🏻 Blazing-fast system monitoring for your desktop (built with Rust, Tauri & Svelte)",
"file_size": 634
} |
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here. | {
"source": "Abdenasser/neohtop",
"title": ".github/ISSUE_TEMPLATE/feature_request.md",
"url": "https://github.com/Abdenasser/neohtop/blob/main/.github/ISSUE_TEMPLATE/feature_request.md",
"date": "2024-10-30T20:44:11",
"stars": 5821,
"description": "💪🏻 Blazing-fast system monitoring for your desktop (built with Rust, Tauri & Svelte)",
"file_size": 603
} |
<div align="center">
<h1>s1: Simple test-time scaling</h1>
<p>Minimal recipe for test-time scaling and strong reasoning performance matching o1-preview with just 1,000 examples & budget forcing
</p>
</div>
<br>

****************************************************************
**Updates:**
* 2025-02: We released [s1.1](https://huggingface.co/simplescaling/s1.1-32B) a better model than s1 by reusing the same s1K questions but with reasoning traces generated by r1 instead of Gemini: [s1K-1.1](https://huggingface.co/datasets/simplescaling/s1K-1.1). Check [this tweet](https://x.com/Muennighoff/status/1889310803746246694) for details
* 2025-01: We released [our paper](https://arxiv.org/abs/2501.19393) announced via [this tweet](https://x.com/Muennighoff/status/1886405528777073134).
****************************************************************
This repository provides an overview of all resources for the paper ["s1: Simple test-time scaling"](https://arxiv.org/abs/2501.19393).
- [Artifacts](#artifacts)
- [Structure](#structure)
- [Inference](#inference)
- [vLLM](#vllm)
- [vLLM with budget forcing](#vllm-with-budget-forcing)
- [transformers](#transformers)
- [Training](#training)
- [Evaluation](#evaluation)
- [Data](#data)
- [Visuals](#visuals)
- [Known Issues](#known-issues)
- [Citation](#citation)
### Artifacts
- **Paper**: https://arxiv.org/abs/2501.19393
- **Model**: https://hf.co/simplescaling/s1-32B
- **Data**: https://hf.co/datasets/simplescaling/s1K
- s1-prob: https://hf.co/datasets/simplescaling/s1-prob
- s1-teasers: https://hf.co/datasets/simplescaling/s1-teasers
- Full 59K: https://hf.co/datasets/simplescaling/data_ablation_full59K
### Structure
- `eval/`: Evaluation scripts
- `data/`: Synthetic data creation scripts & co
- `train/`: Training scripts
### Inference
#### vLLM
Install the `vllm` library and run:
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model = LLM(
"simplescaling/s1.1-32B",
tensor_parallel_size=2,
)
tok = AutoTokenizer.from_pretrained("simplescaling/s1-32B")
stop_token_ids = tok("<|im_end|>")["input_ids"]
sampling_params = SamplingParams(
max_tokens=32768,
min_tokens=0,
stop_token_ids=stop_token_ids,
)
prompt = "How many r in raspberry"
prompt = "<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n" + prompt + "<|im_end|>\n<|im_start|>assistant\n"
o = model.generate(prompt, sampling_params=sampling_params)
print(o[0].outputs[0].text)
```
#### vLLM with budget forcing
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
# Decide on a token limit for thinking; As the model's max tokens is 32768, 32000 usually ensures there is enough space for the model to still answer
MAX_TOKENS_THINKING = 32000
# Decide how often to ignore end-of-thinking token
NUM_IGNORE = 1
model = LLM(
"simplescaling/s1-32B", # s1 originally gets this prompt wrong but with budget forcing it fixes it
tensor_parallel_size=2,
)
tok = AutoTokenizer.from_pretrained(
"simplescaling/s1-32B"
)
stop_token_ids = tok("<|im_end|>")["input_ids"]
sampling_params = SamplingParams(
max_tokens=32768,
min_tokens=0,
stop_token_ids=stop_token_ids,
skip_special_tokens=False,
temperature=0.0,
)
# For the exact raspberry sample in the paper see
prompts = [
"How many r in raspberry",
]
for i, p in enumerate(prompts):
prompt = "<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. You are a helpful assistant.<|im_end|>\n<|im_start|>user\n" + p + "<|im_end|>\n<|im_start|>assistant\n"
stop_token_ids = tok("<|im_start|><|im_end|>")["input_ids"]
sampling_params = SamplingParams(
max_tokens=MAX_TOKENS_THINKING,
min_tokens=0,
stop_token_ids=stop_token_ids,
skip_special_tokens=False,
temperature=0.0,
)
prompt += "<|im_start|>think"
o = model.generate(
prompt,
sampling_params=sampling_params
)
ignore_str = "Wait"
max_tokens_thinking_tmp = MAX_TOKENS_THINKING
if max_tokens_thinking_tmp > 0:
for i in range(NUM_IGNORE): # Num of times to skip stop token
max_tokens_thinking_tmp -= len(o[0].outputs[0].token_ids)
prompt += o[0].outputs[0].text + ignore_str
sampling_params = SamplingParams(
max_tokens=max_tokens_thinking_tmp,
min_tokens=1,
stop_token_ids=stop_token_ids,
skip_special_tokens=False,
temperature=0.0,
)
o = model.generate(
prompt,
sampling_params=sampling_params
)
### Final answer ###
prompt += o[0].outputs[0].text # You can also append "Final Answer:" here like we do for some evaluations to prevent the model from just continuing to reason in its answer when early exiting
stop_token_ids = tok("<|im_end|>")["input_ids"]
sampling_params = SamplingParams(
max_tokens=32768,
min_tokens=0,
stop_token_ids=stop_token_ids,
skip_special_tokens=False,
temperature=0.0,
)
o = model.generate(
prompt,
sampling_params=sampling_params,
)
print("With budget forcing:") # You will see that after the "Wait" in the reasoning trace it fixes its answer
print(prompt + o[0].outputs[0].text)
```
#### transformers
Install the `transformers` & `torch` libraries and run:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "simplescaling/s1.1-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "How many r in raspberry"
messages = [
{"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### Training
To run training, you can find our script at `train/sft.py` which you can invoke via one of the `train/sft*sh` scripts which in turn you can launch via `train/launch.sh` if you are on a SLURM cluster (requires editing the file for your cluster setup).
To train s1-32B/s1.1-32B, we recommend 16 H100 GPUs i.e. 2 nodes with 8 each. For s1.1, we set the block size to 20000 to avoid OOM (https://github.com/simplescaling/s1/blob/0ad4b3de32507b4aa0d4be28f336276ee99b2315/train/sft.sh#L17); Check the wandb logs [here](https://wandb.ai/hashimoto-group/o1/runs/m1ilia77/overview).
Quick start:
```
git clone https://github.com/simplescaling/s1.git
cd s1
pip3 install -r requirements.txt
bash train/sft.sh
```
*Note: If you encounter an out-of-memory (OOM) issue with 8 GPUs, consider enabling gradient checkpointing by adding the following line to your script: `--gradient_checkpointing=True`.*
### Evaluation
We cloned [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) at commit `4cec66e4e468d15789473d6d63c3a61a751fa524` and modified it. Setup:
```bash
cd eval/lm-evaluation-harness
pip install -e .[math,vllm]
```
All commands are in `eval/commands.sh`. For AIME24 we always pick the `aime24_nofigures` result, which uses a dataset that only contains the AIME24 figures if they are important for the task.
If you want to compute statistics (avg thinking tokens etc) for an evaluation run you can use
`python eval/compute_sample_stats.py path_to_samples_file.jsonl`
All our evaluation result files are at: https://hf.co/datasets/simplescaling/results
To run REBASE: commands are in `eval/rebase/run.sh`
Note that for the evaluations in the Discussion section with REBASE we used https://huggingface.co/simplescaling/step-conditional-control-old trained on an older version of our dataset https://huggingface.co/datasets/simplescaling/s1K-step-conditional-control-old and run on an older version of our evaluation using https://huggingface.co/datasets/Maxwell-Jia/AIME_2024.
### Data
To recreate s1K follow the steps below. In various files you will have to rename the organizations `simplescaling` and `qfq` with an organization that you own. **Note that [s1K-1.1](https://huggingface.co/datasets/simplescaling/s1K-1.1) is a better dataset generated with r1 traces instead of Gemini traces.**
1. Run `data/collect_data.py` followed by `data/fix_gpqa.py` & `data/add_aime.py` to collect the questions; Make sure to change the hub path in the respective files to one of your own.
2. Generate traces with Gemini via `python data/gemini.py`. This step will use https://hf.co/datasets/qfq/train which should be roughly equivalent to the dataset you have produced in step 1.
3. Generate answers with Qwen via `python data/bulk_inference.py` that can be launched with `data/bulk_inference.sh`.
4. Add features by running `python data/featurization.py`.
5. Run final filtering via going through `data/filter.ipynb`.
### Visuals
All figures and some tables are created via [this colab](https://colab.research.google.com/drive/1GAfwbJs2Y1dgGGsxrQyQg2G7CRH5NgN3?usp=sharing) equivalent to `visuals/visuals.ipynb`. Some are subsequently edited via the `visuals/s1.fig` file, which you can load in Figma.
### Known Issues
- vLLM throws `ValueError: Token id XXXXX is out of vocabulary`
  - This can happen with budget forcing, especially when running with temperature 1, where the model will sometimes do crazy stuff and predict a vocab id that is larger than its max token id but still within its embedding size, i.e. anything >151664 and <152064. When we refeed the model's previous outputs to it (which is done when setting e.g. max_thinking_tokens in the evaluation), this causes the error because vLLM performs this check even though it would only be an issue for IDs >152064. To fix it you can comment out the check that raises the ValueError (it is the line `if max_input_id > tokenizer.max_token_id:` in `vllm/engine/llm_engine.py`).
### Citation
```bibtex
@misc{muennighoff2025s1simpletesttimescaling,
title={s1: Simple test-time scaling},
author={Niklas Muennighoff and Zitong Yang and Weijia Shi and Xiang Lisa Li and Li Fei-Fei and Hannaneh Hajishirzi and Luke Zettlemoyer and Percy Liang and Emmanuel Candès and Tatsunori Hashimoto},
year={2025},
eprint={2501.19393},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.19393},
}
``` | {
"source": "simplescaling/s1",
"title": "README.md",
"url": "https://github.com/simplescaling/s1/blob/main/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 11028
} |
MIT License
Copyright (c) 2020 EleutherAI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE. | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/LICENSE.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/LICENSE.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 1066
} |
# Language Model Evaluation Harness
[](https://doi.org/10.5281/zenodo.10256836)
---
*Latest News 📣*
- [2024/09] We are prototyping allowing users of LM Evaluation Harness to create and evaluate on text+image multimodal input, text output tasks, and have just added the `hf-multimodal` and `vllm-vlm` model types and `mmmu` task as a prototype feature. We welcome users to try out this in-progress feature and stress-test it for themselves, and suggest they check out [`lmms-eval`](https://github.com/EvolvingLMMs-Lab/lmms-eval), a wonderful project originally forking off of the lm-evaluation-harness, for a broader range of multimodal tasks, models, and features.
- [2024/07] [API model](docs/API_guide.md) support has been updated and refactored, introducing support for batched and async requests, and making it significantly easier to customize and use for your own purposes. **To run Llama 405B, we recommend using VLLM's OpenAI-compliant API to host the model, and use the `local-completions` model type to evaluate the model.**
- [2024/07] New Open LLM Leaderboard tasks have been added ! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group.
---
## Announcement
**A new v0.4.0 release of lm-evaluation-harness is available** !
New updates and features include:
- **New Open LLM Leaderboard tasks have been added ! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group.**
- Internal refactoring
- Config-based task creation and configuration
- Easier import and sharing of externally-defined task config YAMLs
- Support for Jinja2 prompt design, easy modification of prompts + prompt imports from Promptsource
- More advanced configuration options, including output post-processing, answer extraction, and multiple LM generations per document, configurable fewshot settings, and more
- Speedups and new modeling libraries supported, including: faster data-parallel HF model usage, vLLM support, MPS support with HuggingFace, and more
- Logging and usability changes
- New tasks including CoT BIG-Bench-Hard, Belebele, user-defined task groupings, and more
Please see our updated documentation pages in `docs/` for more details.
Development will be continuing on the `main` branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub, or in the [EleutherAI discord](https://discord.gg/eleutherai)!
---
## Overview
This project provides a unified framework to test generative language models on a large number of different evaluation tasks.
**Features:**
- Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented.
- Support for models loaded via [transformers](https://github.com/huggingface/transformers/) (including quantization via [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.
- Support for fast and memory-efficient inference with [vLLM](https://github.com/vllm-project/vllm).
- Support for commercial APIs including [OpenAI](https://openai.com), and [TextSynth](https://textsynth.com/).
- Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).
- Support for local models and benchmarks.
- Evaluation with publicly available prompts ensures reproducibility and comparability between papers.
- Easy support for custom prompts and evaluation metrics.
The Language Model Evaluation Harness is the backend for 🤗 Hugging Face's popular [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), has been used in [hundreds of papers](https://scholar.google.com/scholar?oi=bibs&hl=en&authuser=2&cites=15052937328817631261,4097184744846514103,1520777361382155671,17476825572045927382,18443729326628441434,14801318227356878622,7890865700763267262,12854182577605049984,15641002901115500560,5104500764547628290), and is used internally by dozens of organizations including NVIDIA, Cohere, BigScience, BigCode, Nous Research, and Mosaic ML.
## Install
To install the `lm-eval` package from the github repository, run:
```bash
git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
We also provide a number of optional dependencies for extended functionality. A detailed table is available at the end of this document.
## Basic Usage
### User Guide
A user guide detailing the full list of supported arguments is provided [here](./docs/interface.md), and on the terminal by calling `lm_eval -h`. Alternatively, you can use `lm-eval` instead of `lm_eval`.
A list of supported tasks (or groupings of tasks) can be viewed with `lm-eval --tasks list`. Task descriptions and links to corresponding subfolders are provided [here](./lm_eval/tasks/README.md).
### Hugging Face `transformers`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag` you can use the following command (this assumes you are using a CUDA-compatible GPU):
```bash
lm_eval --model hf \
--model_args pretrained=EleutherAI/gpt-j-6B \
--tasks hellaswag \
--device cuda:0 \
--batch_size 8
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
```bash
lm_eval --model hf \
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
--tasks lambada_openai,hellaswag \
--device cuda:0 \
--batch_size 8
```
Models that are loaded via both `transformers.AutoModelForCausalLM` (autoregressive, decoder-only GPT style models) and `transformers.AutoModelForSeq2SeqLM` (such as encoder-decoder models like T5) in Huggingface are supported.
Batch size selection can be automated by setting the ```--batch_size``` flag to ```auto```. This will perform automatic detection of the largest batch size that will fit on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size, to gain a further speedup. To do this, append ```:N``` to above flag to automatically recompute the largest batch size ```N``` times. For example, to recompute the batch size 4 times, the command would be:
```bash
lm_eval --model hf \
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
--tasks lambada_openai,hellaswag \
--device cuda:0 \
--batch_size auto:4
```
> [!Note]
> Just like you can provide a local path to `transformers.AutoModel`, you can also provide a local path to `lm_eval` via `--model_args pretrained=/path/to/model`
#### Multi-GPU Evaluation with Hugging Face `accelerate`
We support three main ways of using Hugging Face's [accelerate 🚀](https://github.com/huggingface/accelerate) library for multi-GPU evaluation.
To perform *data-parallel evaluation* (where each GPU loads a **separate full copy** of the model), we leverage the `accelerate` launcher as follows:
```
accelerate launch -m lm_eval --model hf \
--tasks lambada_openai,arc_easy \
--batch_size 16
```
(or via `accelerate launch --no-python lm_eval`).
For cases where your model can fit on a single GPU, this allows you to evaluate on K GPUs K times faster than on one.
**WARNING**: This setup does not work with FSDP model sharding, so in `accelerate config` FSDP must be disabled, or the NO_SHARD FSDP option must be used.
The second way of using `accelerate` for multi-GPU evaluation is when your model is *too large to fit on a single GPU.*
In this setting, run the library *outside the `accelerate` launcher*, but passing `parallelize=True` to `--model_args` as follows:
```
lm_eval --model hf \
--tasks lambada_openai,arc_easy \
--model_args parallelize=True \
--batch_size 16
```
This means that your model's weights will be split across all available GPUs.
For more advanced users or even larger models, we allow for the following arguments when `parallelize=True` as well:
- `device_map_option`: How to split model weights across available GPUs. Defaults to "auto".
- `max_memory_per_gpu`: the max GPU memory to use per GPU in loading the model.
- `max_cpu_memory`: the max amount of CPU memory to use when offloading the model weights to RAM.
- `offload_folder`: a folder where model weights will be offloaded to disk if needed.
The third option is to use both at the same time. This will allow you to take advantage of both data parallelism and model sharding, and is especially useful for models that are too large to fit on a single GPU.
```
accelerate launch --multi_gpu --num_processes {nb_of_copies_of_your_model} \
-m lm_eval --model hf \
--tasks lambada_openai,arc_easy \
--model_args parallelize=True \
--batch_size 16
```
To learn more about model parallelism and how to use it with the `accelerate` library, see the [accelerate documentation](https://huggingface.co/docs/transformers/v4.15.0/en/parallelism)
**Warning: We do not natively support multi-node evaluation using the `hf` model type. We advise either running inference requests against an externally hosted server, or creating a custom integration with your distributed framework; see [our GPT-NeoX library integration](https://github.com/EleutherAI/gpt-neox/blob/main/eval.py) and [its eval adapter](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py) for examples of custom multi-machine evaluation code.**
### NVIDIA `nemo` models
[NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo) is a generative AI framework built for researchers and pytorch developers working on language models.
To evaluate a `nemo` model, start by installing NeMo following [the documentation](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#installation). We highly recommend using the NVIDIA PyTorch or NeMo container, especially if you have issues installing Apex or any other dependencies (see [latest released containers](https://github.com/NVIDIA/NeMo/releases)). Please also install the lm evaluation harness library following the instructions in [the Install section](https://github.com/EleutherAI/lm-evaluation-harness/tree/main?tab=readme-ov-file#install).
NeMo models can be obtained through [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/models) or in [NVIDIA's Hugging Face page](https://huggingface.co/nvidia). In [NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo/tree/main/scripts/nlp_language_modeling) there are conversion scripts to convert the `hf` checkpoints of popular models like llama, falcon, mixtral or mpt to `nemo`.
Run a `nemo` model on one GPU:
```bash
lm_eval --model nemo_lm \
--model_args path=<path_to_nemo_model> \
--tasks hellaswag \
--batch_size 32
```
It is recommended to unpack the `nemo` model beforehand to avoid unpacking it inside the docker container, which may overflow disk space. For that you can run:
```
mkdir MY_MODEL
tar -xvf MY_MODEL.nemo -C MY_MODEL
```
#### Multi-GPU evaluation with NVIDIA `nemo` models
By default, only one GPU is used. But we do support either data replication or tensor/pipeline parallelism during evaluation, on one node.
1) To enable data replication, set the `model_args` of `devices` to the number of data replicas to run. For example, the command to run 8 data replicas over 8 GPUs is:
```bash
torchrun --nproc-per-node=8 --no-python lm_eval \
--model nemo_lm \
--model_args path=<path_to_nemo_model>,devices=8 \
--tasks hellaswag \
--batch_size 32
```
2) To enable tensor and/or pipeline parallelism, set the `model_args` of `tensor_model_parallel_size` and/or `pipeline_model_parallel_size`. In addition, you also have to set up `devices` to be equal to the product of `tensor_model_parallel_size` and/or `pipeline_model_parallel_size`. For example, the command to use one node of 4 GPUs with tensor parallelism of 2 and pipeline parallelism of 2 is:
```bash
torchrun --nproc-per-node=4 --no-python lm_eval \
--model nemo_lm \
--model_args path=<path_to_nemo_model>,devices=4,tensor_model_parallel_size=2,pipeline_model_parallel_size=2 \
--tasks hellaswag \
--batch_size 32
```
Note that it is recommended to substitute the `python` command by `torchrun --nproc-per-node=<number of devices> --no-python` to facilitate loading the model into the GPUs. This is especially important for large checkpoints loaded into multiple GPUs.
Not supported yet: multi-node evaluation and combinations of data replication with tensor or pipeline parallelism.
### Tensor + Data Parallel and Optimized Inference with `vLLM`
We also support vLLM for faster inference on [supported model types](https://docs.vllm.ai/en/latest/models/supported_models.html), especially when splitting a model across multiple GPUs. For single-GPU or multi-GPU inference (tensor parallel, data parallel, or a combination of both), run, for example:
```bash
lm_eval --model vllm \
--model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \
--tasks lambada_openai \
--batch_size auto
```
To use vllm, do `pip install lm_eval[vllm]`. For a full list of supported vLLM configurations, please reference our [vLLM integration](https://github.com/EleutherAI/lm-evaluation-harness/blob/e74ec966556253fbe3d8ecba9de675c77c075bce/lm_eval/models/vllm_causallms.py) and the vLLM documentation.
vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation, and provide a [script](./scripts/model_comparator.py) for checking the validity of vllm results against HF.
> [!Tip]
> For fastest performance, we recommend using `--batch_size auto` for vLLM whenever possible, to leverage its continuous batching functionality!
> [!Tip]
> Passing `max_model_len=4096` or some other reasonable default to vLLM through model args may cause speedups or prevent out-of-memory errors when trying to use auto batch size, such as for Mistral-7B-v0.1 which defaults to a maximum length of 32k.
### Model APIs and Inference Servers
Our library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers.
To call a hosted model, use:
```bash
export OPENAI_API_KEY=YOUR_KEY_HERE
lm_eval --model openai-completions \
--model_args model=davinci \
--tasks lambada_openai,hellaswag
```
We also support using your own local inference server with servers that mirror the OpenAI Completions and ChatCompletions APIs.
```bash
lm_eval --model local-completions --tasks gsm8k --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1/completions,num_concurrent=1,max_retries=3,tokenized_requests=False,batch_size=16
```
Note that for externally hosted models, configs such as `--device` which relate to where to place a local model should not be used and do not function. Just like you can use `--model_args` to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support.
| API or Inference Server | Implemented? | `--model <xxx>` name | Models supported: | Request Types: |
|---------------------------------------------------------------------------------------------------------------------------|---------------------------------|-----------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------|
| OpenAI Completions | :heavy_check_mark: | `openai-completions`, `local-completions` | All OpenAI Completions API models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| OpenAI ChatCompletions | :heavy_check_mark: | `openai-chat-completions`, `local-chat-completions` | [All ChatCompletions API models](https://platform.openai.com/docs/guides/gpt) | `generate_until` (no logprobs) |
| Anthropic | :heavy_check_mark: | `anthropic` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/reference/selecting-a-model) | `generate_until` (no logprobs) |
| Anthropic Chat | :heavy_check_mark: | `anthropic-chat`, `anthropic-chat-completions` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/docs/models-overview) | `generate_until` (no logprobs) |
| Textsynth | :heavy_check_mark: | `textsynth` | [All supported engines](https://textsynth.com/documentation.html#engines) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Cohere | [:hourglass: - blocked on Cohere API bug](https://github.com/EleutherAI/lm-evaluation-harness/pull/395) | N/A | [All `cohere.generate()` engines](https://docs.cohere.com/docs/models) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| [Llama.cpp](https://github.com/ggerganov/llama.cpp) (via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) | :heavy_check_mark: | `gguf`, `ggml` | [All models supported by llama.cpp](https://github.com/ggerganov/llama.cpp) | `generate_until`, `loglikelihood`, (perplexity evaluation not yet implemented) |
| vLLM | :heavy_check_mark: | `vllm` | [Most HF Causal Language Models](https://docs.vllm.ai/en/latest/models/supported_models.html) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Mamba | :heavy_check_mark: | `mamba_ssm` | [Mamba architecture Language Models via the `mamba_ssm` package](https://huggingface.co/state-spaces) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Huggingface Optimum (Causal LMs) | ✔️ | `openvino` | Any decoder-only AutoModelForCausalLM converted with Huggingface Optimum into OpenVINO™ Intermediate Representation (IR) format | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Neuron via AWS Inf2 (Causal LMs) | ✔️ | `neuronx` | Any decoder-only AutoModelForCausalLM supported to run on [huggingface-ami image for inferentia2](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| [Neural Magic DeepSparse](https://github.com/neuralmagic/deepsparse) | ✔️ | `deepsparse` | Any LM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub with the "deepsparse" tag](https://huggingface.co/models?other=deepsparse) | `generate_until`, `loglikelihood` |
| [Neural Magic SparseML](https://github.com/neuralmagic/sparseml) | ✔️ | `sparseml` | Any decoder-only AutoModelForCausalLM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub](https://huggingface.co/neuralmagic). Especially useful for models with quantization like [`zoo:llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized`](https://sparsezoo.neuralmagic.com/models/llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Your local inference server! | :heavy_check_mark: | `local-completions` or `local-chat-completions` | Support for OpenAI API-compatible servers, with easy customization for other APIs. | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
For more information on the different task `output_types` and model request types, see [our documentation](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md#interface).
> [!Note]
> For best performance with closed chat model APIs such as Anthropic Claude 3 and GPT-4, we recommend carefully looking at a few sample outputs using `--limit 10` first to confirm answer extraction and scoring on generative tasks is performing as expected. Providing `system="<some system prompt here>"` within `--model_args` for anthropic-chat-completions, to instruct the model what format to respond in, may be useful.
### Other Frameworks
A number of other libraries contain scripts for calling the eval harness through their library. These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py).
To create your own custom integration you can follow instructions from [this tutorial](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md#external-library-usage).
### Additional Features
> [!Note]
> For tasks unsuitable for direct evaluation, either due to risks associated with executing untrusted code or complexities in the evaluation process, the `--predict_only` flag is available to obtain decoded generations for post-hoc evaluation.
If you have a Metal compatible Mac, you can run the eval harness using the MPS back-end by replacing `--device cuda:0` with `--device mps` (requires PyTorch version 2.1 or higher). **Note that the PyTorch MPS backend is still in early stages of development, so correctness issues or unsupported operations may exist. If you observe oddities in model performance on the MPS back-end, we recommend first checking that a forward pass of your model on `--device cpu` and `--device mps` match.**
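A quick way to perform that CPU-vs-MPS sanity check is a short standalone script along the following lines; the model choice (`EleutherAI/pythia-160m`) and the idea of comparing maximum logit differences are arbitrary choices for this sketch, not part of the harness.
```python
# Sketch: compare a forward pass of a small HF model on CPU vs. MPS.
# Model name and comparison metric are arbitrary choices for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

assert torch.backends.mps.is_available(), "requires a Metal-compatible Mac with MPS"

model_name = "EleutherAI/pythia-160m"
tok = AutoTokenizer.from_pretrained(model_name)
inputs = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt")

logits = {}
for device in ("cpu", "mps"):
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device).eval()
    with torch.no_grad():
        out = model(**{k: v.to(device) for k, v in inputs.items()})
    logits[device] = out.logits.float().cpu()

max_diff = (logits["cpu"] - logits["mps"]).abs().max().item()
print(f"max |logit difference| between cpu and mps: {max_diff:.6f}")
```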
> [!Note]
> You can inspect what the LM inputs look like by running the following command:
> ```bash
> python write_out.py \
> --tasks <task1,task2,...> \
> --num_fewshot 5 \
> --num_examples 10 \
> --output_base_path /path/to/output/folder
> ```
> This will write out one text file for each task.
To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag:
```bash
lm_eval --model openai \
--model_args engine=davinci \
--tasks lambada_openai,hellaswag \
--check_integrity
```
## Advanced Usage Tips
For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=` or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and add `,peft=PATH` to the `model_args` argument:
```bash
lm_eval --model hf \
--model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \
--tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
--device cuda:0
```
Models provided as delta weights can be easily loaded using the Hugging Face transformers library. Within `--model_args`, set the `delta` argument to specify the delta weights, and use the `pretrained` argument to designate the relative base model to which they will be applied:
```bash
lm_eval --model hf \
--model_args pretrained=Ejafa/llama_7B,delta=lmsys/vicuna-7b-delta-v1.1 \
--tasks hellaswag
```
[GPTQ](https://github.com/PanQiWei/AutoGPTQ) quantized models can be loaded by specifying their file names in `,autogptq=NAME` (or `,autogptq=True` for default names) in the `model_args` argument:
```bash
lm_eval --model hf \
--model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
--tasks hellaswag
```
We support wildcards in task names, for example you can run all of the machine-translated lambada tasks via `--task lambada_openai_mt_*`.
## Saving Results
To save evaluation results provide an `--output_path`. We also support logging model responses with the `--log_samples` flag for post-hoc analysis.
Additionally, one can provide a directory with `--use_cache` to cache the results of prior runs. This allows you to avoid repeated execution of the same (model, task) pairs for re-scoring.
To push results and samples to the Hugging Face Hub, first ensure an access token with write access is set in the `HF_TOKEN` environment variable. Then, use the `--hf_hub_log_args` flag to specify the organization, repository name, repository visibility, and whether to push results and samples to the Hub - [example dataset on the HF Hub](https://huggingface.co/datasets/KonradSzafer/lm-eval-results-demo). For instance:
```bash
lm_eval --model hf \
--model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
--tasks hellaswag \
--log_samples \
--output_path results \
    --hf_hub_log_args hub_results_org=EleutherAI,hub_repo_name=lm-eval-results,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False
```
This allows you to easily download the results and samples from the Hub, using:
```python
from datasets import load_dataset
load_dataset("EleutherAI/lm-eval-results-private", "hellaswag", "latest")
```
For a full list of supported arguments, check out the [interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md) guide in our documentation!
## Visualizing Results
You can seamlessly visualize and analyze the results of your evaluation harness runs using both Weights & Biases (W&B) and Zeno.
### Zeno
You can use [Zeno](https://zenoml.com) to visualize the results of your eval harness runs.
First, head to [hub.zenoml.com](https://hub.zenoml.com) to create an account and get an API key [on your account page](https://hub.zenoml.com/account).
Add this key as an environment variable:
```bash
export ZENO_API_KEY=[your api key]
```
You'll also need to install the `lm_eval[zeno]` package extra.
To visualize the results, run the eval harness with the `log_samples` and `output_path` flags.
We expect `output_path` to contain multiple folders that represent individual model names.
You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.
```bash
lm_eval \
--model hf \
--model_args pretrained=EleutherAI/gpt-j-6B \
--tasks hellaswag \
--device cuda:0 \
--batch_size 8 \
--log_samples \
--output_path output/gpt-j-6B
```
Then, you can upload the resulting data using the `zeno_visualize` script:
```bash
python scripts/zeno_visualize.py \
--data_path output \
--project_name "Eleuther Project"
```
This will use all subfolders in `data_path` as different models and upload all tasks within these model folders to Zeno.
If you run the eval harness on multiple tasks, the `project_name` will be used as a prefix and one project will be created per task.
You can find an example of this workflow in [examples/visualize-zeno.ipynb](examples/visualize-zeno.ipynb).
### Weights and Biases
With the [Weights and Biases](https://wandb.ai/site) integration, you can now spend more time extracting deeper insights into your evaluation results. The integration is designed to streamline the process of logging and visualizing experiment results using the Weights & Biases (W&B) platform.
The integration provides the following functionality:
- automatically log the evaluation results,
- log the samples as W&B Tables for easy visualization,
- log the `results.json` file as an artifact for version control,
- log the `<task_name>_eval_samples.json` file if the samples are logged,
- generate a comprehensive report for analysis and visualization with all the important metrics,
- log task and CLI-specific configs,
- and more out of the box, such as the command used to run the evaluation, GPU/CPU counts, timestamp, etc.
First, install the `lm_eval[wandb]` package extra with `pip install lm_eval[wandb]`.
Authenticate your machine with your unique W&B token (visit https://wandb.ai/authorize to get one), then run `wandb login` in your command line terminal.
Run the eval harness as usual with the `--wandb_args` flag. Use this flag to provide arguments for initializing a wandb run ([wandb.init](https://docs.wandb.ai/ref/python/init)) as comma-separated string arguments.
```bash
lm_eval \
--model hf \
--model_args pretrained=microsoft/phi-2,trust_remote_code=True \
--tasks hellaswag,mmlu_abstract_algebra \
--device cuda:0 \
--batch_size 8 \
--output_path output/phi-2 \
--limit 10 \
--wandb_args project=lm-eval-harness-integration \
--log_samples
```
In the stdout, you will find the link to the W&B run page as well as a link to the generated report. You can find an example of this workflow in [examples/visualize-wandb.ipynb](examples/visualize-wandb.ipynb), along with an example of how to integrate it beyond the CLI.
## How to Contribute or Learn More?
For more information on the library and how everything fits together, check out all of our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)! We plan to post a larger roadmap of desired + planned library improvements soon, with more information on how contributors can help.
### Implementing new tasks
To implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md).
In general, we follow this priority list for addressing concerns about prompting and other eval details:
1. If there is widespread agreement among people who train LLMs, use the agreed upon procedure.
2. If there is a clear and unambiguous official implementation, use that procedure.
3. If there is widespread agreement among people who evaluate LLMs, use the agreed upon procedure.
4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations. As before, prioritize choosing from among the implementations found in LLM training papers.
These are guidelines and not rules, and can be overruled in special circumstances.
We try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from [Language Models are Few Shot Learners](https://arxiv.org/abs/2005.14165) as our original goal was specifically to compare results with that paper.
### Support
The best way to get support is to open an issue on this repo or join the [EleutherAI Discord server](https://discord.gg/eleutherai). The `#lm-thunderdome` channel is dedicated to developing this project and the `#release-discussion` channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you!
## Optional Extras
Extras dependencies can be installed via `pip install -e ".[NAME]"`
| Name | Use |
|-----------------|----------------------------------------------|
| api | For using api models (Anthropic, OpenAI API) |
| deepsparse | For running NM's DeepSparse models |
| dev | For linting PRs and contributions |
| gptq | For loading models with GPTQ |
| hf_transfer | For speeding up HF Hub file downloads |
| ifeval | For running the IFEval task |
| neuronx | For running on AWS inf2 instances |
| mamba | For loading Mamba SSM models |
| math | For running math task answer checking |
| multilingual | For multilingual tokenizers |
| optimum | For running Intel OpenVINO models |
| promptsource | For using PromptSource prompts |
| sentencepiece | For using the sentencepiece tokenizer |
| sparseml | For using NM's SparseML models |
| testing | For running library test suite |
| vllm | For loading models with vLLM |
| zeno | For visualizing results with Zeno |
| all | Loads all extras (not recommended) |
## Cite as
```
@misc{eval-harness,
author = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
title = {A framework for few-shot language model evaluation},
month = 07,
year = 2024,
publisher = {Zenodo},
version = {v0.4.3},
doi = {10.5281/zenodo.12608602},
url = {https://zenodo.org/records/12608602}
}
``` | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 39773
} |
# TemplateAPI Usage Guide
The `TemplateAPI` class is a versatile superclass designed to facilitate the integration of various API-based language models into the lm-evaluation-harness framework. This guide will explain how to use and extend the `TemplateAPI` class to implement your own API models. If your API implements the OpenAI API you can use the `local-completions` or the `local-chat-completions` (defined [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/openai_completions.py)) model types, which can also serve as examples of how to effectively subclass this template.
## Overview
The `TemplateAPI` class provides a template for creating API-based model implementations. It handles common functionalities such as:
- Tokenization (optional)
- Batch processing
- Caching
- Retrying failed requests
- Parsing API responses
To use this class, you typically need to subclass it and implement specific methods for your API.
## Key Methods to Implement
When subclassing `TemplateAPI`, you need to implement the following methods:
1. `_create_payload`: Creates the JSON payload for API requests.
2. `parse_logprobs`: Parses log probabilities from API responses.
3. `parse_generations`: Parses generated text from API responses.
4. `headers`: Returns the headers for the API request.
You may also need to override other methods or properties depending on your API's specific requirements.
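As an orientation aid, a bare-bones subclass might look like the sketch below. The class name and registry key are placeholders, and the import paths are assumptions about the current repository layout rather than a guaranteed public API:
```python
from typing import List, Optional, Tuple

from lm_eval.api.registry import register_model
from lm_eval.models.api_models import TemplateAPI  # path assumed; check your version


@register_model("my-api")
class MyAPI(TemplateAPI):
    # Also override the request-header property described in item 4 above
    # to supply authentication headers for your service.

    def _create_payload(self, messages, generate=False, gen_kwargs: Optional[dict] = None, **kwargs) -> dict:
        # Build the JSON body your API expects for generation or loglikelihood requests.
        raise NotImplementedError

    @staticmethod
    def parse_logprobs(outputs, tokens=None, ctxlens=None, **kwargs) -> List[Tuple[float, bool]]:
        # Extract (summed continuation logprob, is_greedy) pairs from the API response.
        raise NotImplementedError

    @staticmethod
    def parse_generations(outputs, **kwargs) -> List[str]:
        # Extract the generated strings from the API response.
        raise NotImplementedError
```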
> [!NOTE]
> Currently, loglikelihood and MCQ-based tasks (such as MMLU) are only supported for completion endpoints, not for chat-completion endpoints (those that expect a list of dicts). Completion APIs which support instruct-tuned models can be evaluated with the `--apply_chat_template` option in order to evaluate models using a chat template format while still being able to access the model logits needed for loglikelihood-based tasks.
## TemplateAPI Arguments
When initializing a `TemplateAPI` instance or a subclass, you can provide several arguments to customize its behavior. Here's a detailed explanation of some important arguments:
- `model` or `pretrained` (str):
- The name or identifier of the model to use.
- `model` takes precedence over `pretrained` when both are provided.
- `base_url` (str):
- The base URL for the API endpoint.
- `tokenizer` (str, optional):
- The name or path of the tokenizer to use.
- If not provided, it defaults to using the same tokenizer name as the model.
- `num_concurrent` (int):
- Number of concurrent requests to make to the API.
- Useful for APIs that support parallel processing.
- Default is 1 (sequential processing).
- `tokenized_requests` (bool):
- Determines whether the input is pre-tokenized. Defaults to `True`.
- Requests can be sent in either tokenized form (`list[list[int]]`) or as text (`list[str]`, or `str` for batch_size=1).
- For loglikelihood-based tasks, prompts require tokenization to calculate the context length. If `False` prompts are decoded back to text before being sent to the API.
- Not as important for `generate_until` tasks.
- Ignored for chat formatted inputs (list[dict...]) or if tokenizer_backend is None.
- `tokenizer_backend` (str, optional):
- Required for loglikelihood-based or MCQ tasks.
- Specifies the tokenizer library to use. Options are "tiktoken", "huggingface", or None.
- Default is "huggingface".
- `max_length` (int, optional):
- Maximum length of input + output.
- Default is 2048.
- `max_retries` (int, optional):
- Maximum number of retries for failed API requests.
- Default is 3.
- `max_gen_toks` (int, optional):
- Maximum number of tokens to generate in completion tasks.
- Default is 256 or set in task yaml.
- `batch_size` (int or str, optional):
- Number of requests to batch together (if the API supports batching).
- Can be an integer or "auto" (which defaults to 1 for API models).
- Default is 1.
- `seed` (int, optional):
- Random seed for reproducibility.
- Default is 1234.
- `add_bos_token` (bool, optional):
- Whether to add the beginning-of-sequence token to inputs (when tokenizing).
- Default is False.
- `custom_prefix_token_id` (int, optional):
- Custom token ID to use as a prefix for inputs.
- If not provided, uses the model's default BOS or EOS token (if `add_bos_token` is True).
Example usage:
```python
class MyAPIModel(TemplateAPI):
def __init__(self, **kwargs):
super().__init__(
model="my-model",
base_url="https://api.mymodel.com/v1/completions",
tokenizer_backend="huggingface",
num_concurrent=5,
max_retries=5,
batch_size=10,
**kwargs
)
# Implement other required methods...
```
When subclassing `TemplateAPI`, you can override these arguments in your `__init__` method to set default values specific to your API. You can also add additional (potentially user-specified) arguments as needed for your specific implementation.
## Example Implementation: OpenAI API
The `OpenAICompletionsAPI` and `OpenAIChatCompletion` classes (defined [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/models/openai_completions.py)) demonstrate how to implement API models using the `TemplateAPI` class. Here's a breakdown of the key components:
### 1. Subclassing and Initialization
```python
@register_model("openai-completions")
class OpenAICompletionsAPI(LocalCompletionsAPI):
def __init__(
self,
base_url="https://api.openai.com/v1/completions",
tokenizer_backend="tiktoken",
**kwargs,
):
super().__init__(
base_url=base_url, tokenizer_backend=tokenizer_backend, **kwargs
)
```
### 2. Implementing API Key Retrieval
```python
@cached_property
def api_key(self):
key = os.environ.get("OPENAI_API_KEY", None)
if key is None:
raise ValueError(
"API key not found. Please set the OPENAI_API_KEY environment variable."
)
return key
```
### 3. Creating the Payload
```python
def _create_payload(
self,
messages: Union[List[List[int]], List[dict], List[str], str],
generate=False,
gen_kwargs: Optional[dict] = None,
**kwargs,
) -> dict:
if generate:
# ... (implementation for generation)
else:
# ... (implementation for log likelihood)
```
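For orientation, here is a hedged sketch of what those two branches might produce for an OpenAI-style `/v1/completions` endpoint. The field names and the `self.model` attribute are assumptions for illustration, not the library's actual implementation:
```python
def _create_payload(self, messages, generate=False, gen_kwargs=None, **kwargs) -> dict:
    if generate:
        gen_kwargs = gen_kwargs or {}
        # Generation request: sample up to max_gen_toks tokens after the prompt.
        return {
            "model": self.model,  # assumed attribute holding the model name
            "prompt": messages,
            "max_tokens": gen_kwargs.get("max_gen_toks", 256),
            "temperature": gen_kwargs.get("temperature", 0.0),
            "stop": gen_kwargs.get("until", None),
        }
    else:
        # Loglikelihood request: echo the prompt and ask for per-token logprobs,
        # generating no new tokens.
        return {
            "model": self.model,
            "prompt": messages,
            "max_tokens": 0,
            "echo": True,
            "logprobs": 1,
            "temperature": 0.0,
        }
```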
### 4. Parsing API Responses
```python
@staticmethod
def parse_logprobs(
outputs: Union[Dict, List[Dict]],
tokens: List[List[int]] = None,
ctxlens: List[int] = None,
**kwargs,
) -> List[Tuple[float, bool]]:
# ... (implementation)
@staticmethod
def parse_generations(outputs: Union[Dict, List[Dict]], **kwargs) -> List[str]:
# ... (implementation)
```
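And a similarly hedged sketch of the parsing side, assuming an OpenAI-completions-style response with a `choices` list and per-token `logprobs`; a real implementation also needs to handle batched choices and perform the greedy check properly:
```python
@staticmethod
def parse_logprobs(outputs, tokens=None, ctxlens=None, **kwargs):
    results = []
    outputs = outputs if isinstance(outputs, list) else [outputs]
    for out, ctxlen in zip(outputs, ctxlens or [0] * len(outputs)):
        logprobs = out["choices"][0]["logprobs"]
        # Sum logprobs over the continuation tokens only (everything after the context).
        continuation_logprob = sum(logprobs["token_logprobs"][ctxlen:])
        # Placeholder greedy flag: a real implementation compares each continuation
        # token against the top-ranked token (e.g. via `top_logprobs`).
        results.append((continuation_logprob, True))
    return results

@staticmethod
def parse_generations(outputs, **kwargs):
    outputs = outputs if isinstance(outputs, list) else [outputs]
    return [choice["text"] for out in outputs for choice in out["choices"]]
```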
The requests are initiated in the `model_call` or the `amodel_call` methods.
## Implementing Your Own API Model
To implement your own API model:
1. Subclass `TemplateAPI` or one of its subclasses (e.g., `LocalCompletionsAPI`).
2. Override the `__init__` method if you need to set specific parameters.
3. Implement the `_create_payload` and `header` methods to create the appropriate payload for your API.
4. Implement the `parse_logprobs` and `parse_generations` methods to parse your API's responses.
5. Override the `api_key` property if your API requires authentication.
6. Override any other methods as necessary to match your API's behavior.
## Best Practices
1. Use the `@register_model` decorator to register your model with the framework (and import it in `lm_eval/models/__init__.py`!).
2. Use environment variables for sensitive information like API keys.
3. Properly handle batching and concurrent requests if supported by your API. | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/API_guide.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/API_guide.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 7742
} |
# Contributing to LM Evaluation Harness
Welcome and thank you for your interest in the LM Evaluation Harness! We welcome contributions and feedback and appreciate your time spent with our library, and hope you find it useful!
## Important Resources
There are several places information about LM Evaluation Harness is located:
- Our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)
- We occasionally use [GitHub Milestones](https://github.com/EleutherAI/lm-evaluation-harness/milestones) to track progress toward specific near-term version releases.
- We maintain a [Project Board](https://github.com/orgs/EleutherAI/projects/25) for tracking current work items and PRs, and for future roadmap items or feature requests.
- Further discussion and support conversations are located in the #lm-thunderdome channel of the [EleutherAI discord](https://discord.gg/eleutherai).
## Code Style
LM Evaluation Harness uses [ruff](https://github.com/astral-sh/ruff) for linting via [pre-commit](https://pre-commit.com/).
You can install linters and dev tools via
```pip install lm_eval[dev]``` or ```pip install -e ".[dev]"```
Then, run
```pre-commit install```
in order to ensure linters and other checks will be run upon committing.
## Testing
We use [pytest](https://docs.pytest.org/en/latest/) for running unit tests. All library unit tests can be run via:
```
python -m pytest --showlocals -s -vv -n=auto --ignore=tests/models/test_neuralmagic.py --ignore=tests/models/test_openvino.py
```
## Contributor License Agreement
We ask that new contributors agree to a Contributor License Agreement affirming that EleutherAI has the rights to use your contribution to our library.
First-time pull requests will have a reply added by @CLAassistant containing instructions for how to confirm this, and we require it before merging your PR.
## Contribution Best Practices
We recommend a few best practices to make your contributions or reported errors easier to assist with.
**For Pull Requests:**
- PRs should be titled descriptively, and be opened with a brief description of the scope and intent of the new contribution.
- New features should have appropriate documentation added alongside them.
- Aim for code maintainability, and minimize code copying.
- If opening a task, try to share test results on the task using a publicly-available model, and if any public results are available on the task, compare to them.
**For Feature Requests:**
- Provide a short paragraph's worth of description. What is the feature you are requesting? What is its motivation, and an example use case of it? How does this differ from what is currently supported?
**For Bug Reports**:
- Provide a short description of the bug.
- Provide a *reproducible example*--what is the command you run with our library that results in this error? Have you tried any other steps to resolve it?
- Provide a *full error traceback* of the error that occurs, if applicable. A one-line error message or small screenshot snippet is unhelpful without the surrounding context.
- Note what version of the codebase you are using, and any specifics of your environment and setup that may be relevant.
**For Requesting New Tasks**:
- Provide a 1-2 sentence description of what the task is and what it evaluates.
- Provide a link to the paper introducing the task.
- Provide a link to where the dataset can be found.
- Provide a link to a paper containing results on an open-source model on the task, for use in comparisons and implementation validation.
- If applicable, link to any codebase that has implemented the task (especially the original publication's codebase, if existent).
## How Can I Get Involved?
To quickly get started, we maintain a list of good first issues, which can be found [on our project board](https://github.com/orgs/EleutherAI/projects/25/views/8) or by [filtering GH Issues](https://github.com/EleutherAI/lm-evaluation-harness/issues?q=is%3Aopen+label%3A%22good+first+issue%22+label%3A%22help+wanted%22). These are typically smaller code changes or self-contained features which can be added without extensive familiarity with library internals, and we recommend new contributors consider taking a stab at one of these first if they are feeling uncertain where to begin.
There are a number of distinct ways to contribute to LM Evaluation Harness, and all are extremely helpful! A sampling of ways to contribute include:
- **Implementing and verifying new evaluation tasks**: Is there a task you'd like to see LM Evaluation Harness support? Consider opening an issue requesting it, or helping add it! Verifying and cross-checking task implementations with their original versions is also a very valuable form of assistance in ensuring standardized evaluation.
- **Improving documentation** - Improvements to the documentation, or noting pain points / gaps in documentation, are helpful in order for us to improve the user experience of the library and clarity + coverage of documentation.
- **Testing and devops** - We are very grateful for any assistance in adding tests for the library that can be run for new PRs, and other devops workflows.
- **Adding new modeling / inference library integrations** - We hope to support a broad range of commonly-used inference libraries popular among the community, and welcome PRs for new integrations, so long as they are documented properly and maintainable.
- **Proposing or Contributing New Features** - We want LM Evaluation Harness to support a broad range of evaluation usecases. If you have a feature that is not currently supported but desired, feel free to open an issue describing the feature and, if applicable, how you intend to implement it. We would be happy to give feedback on the cleanest way to implement new functionalities and are happy to coordinate with interested contributors via GH discussions or via discord.
We hope that this has been helpful, and appreciate your interest in contributing! Further questions can be directed to [our Discord](https://discord.gg/eleutherai). | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/CONTRIBUTING.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/CONTRIBUTING.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 6071
} |
# Eval Harness Documentation
Welcome to the docs for the LM Evaluation Harness!
## Table of Contents
* To learn about the public interface of the library, as well as how to evaluate via the command line or as integrated into an external library, see the [Interface](./interface.md).
* To learn how to add a new library, API, or model type to the library, as well as a quick explainer on the types of ways to evaluate an LM, see the [Model Guide](./model_guide.md).
* For an extended description of how to extend the library to new model classes served over an API, see the [API Guide](./API_guide.md).
* For a crash course on adding new tasks to the library, see our [New Task Guide](./new_task_guide.md).
* To learn more about pushing the limits of task configuration that the Eval Harness supports, see the [Task Configuration Guide](./task_guide.md). | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 858
} |
# Decontamination
## Usage
The provided directory should contain the ngram files and info.json produced in "Pile Ngram Generation" further down.
```bash
python -m lm_eval \
--model gpt2 \
--device 0 \
--tasks sciq
```
## Background
Downstream evaluations test model generalization, and are less useful when test set data also exists in the training set, referred to as leakage or contamination.
Filtering your training set against the test set is a good first step, however this isn't always possible, as in the case of a new benchmark or one that wasn't considered prior to model training. When training set filtering isn't possible, it is useful to measure the impact of test set leakage by detecting the contaminated test examples and producing a clean version of the benchmark.
The basis for our decontamination procedure can be found in Appendix C of "Language Models are Few-Shot Learners". OpenAI defined a test document as contaminated if any N-gram overlap existed with any training document. They used a range of N values between 8 and 13 depending on dataset, while we just used 13 for simplicity.
## Implementation
Contamination detection can be found in `lm_eval/decontaminate.py` with supporting code in `lm_eval/decontamination/`.
`decontaminate.py` does the following:
1. Build dictionaries of all ngrams and their corresponding evaluation/document ids.
2. Scan through sorted files containing training set n-grams.
3. If a match is found, the corresponding evaluation/document combinations are marked as contaminated.
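To make step 1 concrete, a toy sketch of the n-gram indexing idea (not the library's actual implementation, which streams sorted buckets to disk) could look like this:
```python
from collections import defaultdict

def build_ngram_index(docs, n=13):
    """Map each n-gram to the set of document ids it appears in."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        tokens = text.split()
        for i in range(len(tokens) - n + 1):
            index[" ".join(tokens[i : i + n])].add(doc_id)
    return index

# Any n-gram that also appears in the training set marks its documents as contaminated.
```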
`lm_eval/evaluator.py` can then produce a clean version of the benchmark by excluding the results of contaminated documents. For each metric, a clean version will be shown in the results with a "decontaminate" suffix.
This is disabled by default for new tasks; to support decontamination on a task, override the `should_decontaminate` and `doc_to_decontamination_query` methods. For more details see the [task guide](task_guide.md).
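As a minimal sketch of such an override for a class-based task (the import path, base class, and the `question` field are assumptions for illustration; other required task methods are omitted):
```python
from lm_eval.api.task import Task  # path assumed; adjust to your version

class MyTask(Task):
    def should_decontaminate(self):
        # Opt this task into decontamination.
        return True

    def doc_to_decontamination_query(self, doc):
        # Return the text that will be checked for 13-gram overlap with the training set.
        return doc["question"]  # illustrative field name; depends on your dataset schema
```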
## Pile Ngram Generation
The relevant scripts can be found in `scripts/clean_training_data`, which also import from
`lm_eval/decontamination/`
1. `git clone https://github.com/EleutherAI/lm-evaluation-harness.git`
2. `pip install -r requirements.txt`
3. Download The Pile from [The Eye](https://the-eye.eu/public/AI/pile/train/)
4. Place pile files in "pile" directory under "lm-evaluation-harness" (or create a symlink)
5. Run generate_13_grams.
```bash
export PYTHONHASHSEED=0
python -m scripts.clean_training_data.generate_13_grams \
-dir path/to/working/directory \
-n 13 \
-buckets 500
```
Took approximately 4 days for us. We had the time to wait, but this could be scaled out by doing partial pile scans on multiple instances of this script and merging the relevant buckets. We fixed PYTHONHASHSEED to ensure reproducibility of bucket hashing in case you need to stop and start.
6. Sort the generated 13-grams.
```bash
python -m scripts.clean_training_data.sort_13_gram_buckets \
-dir path/to/working/directory/output
```
Took approximately 5 days for us. You could speed this up by spreading the files around to different machines and running the sort script before gathering them together.
7. Compress the sorted 13 grams files and place them together with info.json.
This step only takes a few hours.
```bash
python -m scripts.clean_training_data.compress_and_package \
-dir path/to/working/directory \
-output path/to/final/directory \
-procs 8
``` | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/decontamination.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/decontamination.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 3500
} |