Modalities: Text · Formats: parquet · Libraries: Datasets, Dask
Commit dad20ec (verified) · 1 parent: 367167f · committed by zixianma

Update README.md

Browse files
Files changed (1) hide show
  1. README.md +2 -2
README.md CHANGED
@@ -37,7 +37,7 @@ configs:
 
 # 🌮 TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action
 
-<h3 align="left"> <a href="https://taco-project.github.io/">🌐 Website</a> | <a href="https://arxiv.org/pdf/2412.05479">📑 Arxiv</a> | <a href="https://huggingface.co/collections/Salesforce/cota-datasets-675333e57dd34a4adc5f3ff4">🤗 Datasets</a>
+<h3 align="left"> <a href="https://taco-project.github.io/">🌐 Website</a> | <a href="https://arxiv.org/pdf/2412.05479">📑 Arxiv</a> | <a href="https://github.com/SalesforceAIResearch/CoTA">💻 Code</a>| <a href="https://huggingface.co/collections/Salesforce/cota-datasets-675333e57dd34a4adc5f3ff4">🤗 Datasets</a>
 
 <h5 align="left"> If you like our project or are interested in its updates, please star us :) Thank you! ⭐ </h2>
 
@@ -75,7 +75,7 @@ The intended use of this dataset is to finetune multi-modal language models to p
 
 <!-- This section describes suitable use cases for the dataset. -->
 
-You can directly use this dataset to train multi-modal language models with the Mantis codebase. To train LLaVA-OneVision models, please use [cota-llava](https://huggingface.co/collections/Salesforce/taco-models-and-datasets-675333e57dd34a4adc5f3ff4) in the [collection](https://huggingface.co/collections/Salesforce/taco-models-and-datasets-675333e57dd34a4adc5f3ff4).
+You can directly use this dataset to train Mantis-based models with our [codebase](https://github.com/SalesforceAIResearch/TACO). To train LLaVA-OneVision models, please use ```cota-llava``` in the [collection](https://huggingface.co/collections/Salesforce/cota-datasets-675333e57dd34a4adc5f3ff4).
 
 To train other multi-modal language models, you might need to adapt the conversation format to work for your particular models.
 
 ### Out-of-Scope Use
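
The updated usage note says the dataset can be loaded and used directly for training, or adapted to other conversation formats. A minimal sketch of inspecting it with the 🤗 Datasets library is below; the repo id is a placeholder (not named in this commit) and the field names must be checked against the actual card.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual dataset id from the CoTA collection linked above.
ds = load_dataset("Salesforce/<cota-dataset-name>", split="train")

# Inspect the available columns (e.g. conversation turns, image references)
# before adapting the format for a non-Mantis model.
print(ds.column_names)
print(ds[0])
```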