---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: conversation
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: metadata
    struct:
    - name: dataset
      dtype: string
    - name: task_instruction
      dtype: string
  - name: images
    sequence: string
  splits:
  - name: cota_293k
    num_bytes: 684640621
    num_examples: 293105
  download_size: 107061603
  dataset_size: 684640621
configs:
- config_name: default
  data_files:
  - split: cota_293k
    path: data/cota_293k-*
---

# TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action

[Paper](https://arxiv.org/pdf/2412.05479) | [Website](https://taco-project.github.io/) | [Code](https://github.com/SalesforceAIResearch/TACO) | [Datasets](https://huggingface.co/collections/Salesforce/cota-datasets-675333e57dd34a4adc5f3ff4)

## Summary

TL;DR: CoTA is a large-scale dataset of synthetic Chains-of-Thought-and-Action (CoTA) traces generated by multi-modal large language models or programs.

## Load data

```python
from datasets import load_dataset

dataset = load_dataset("agentstudio-family/cota-mantis", split="cota_293k")
```
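Per the `dataset_info` schema above, each example carries an `id`, a multi-turn `conversation` (a list of `role`/`content` messages), `metadata` with the source `dataset` and `task_instruction`, and a sequence of `images` paths. Below is a minimal sketch of inspecting one record; the exact message contents and role names are not guaranteed by the schema, so treat it as an illustration rather than a fixed format.

```python
from datasets import load_dataset

# Load the split (same call as above) and look at one record.
dataset = load_dataset("agentstudio-family/cota-mantis", split="cota_293k")
example = dataset[0]

print(example["id"])                            # unique example id
print(example["metadata"]["dataset"])           # source dataset for this example
print(example["metadata"]["task_instruction"])  # task instruction associated with this example
print(example["images"])                        # image paths referenced by the conversation

# Each turn is a dict with "role" and "content" fields.
for turn in example["conversation"]:
    print(f'{turn["role"]}: {turn["content"][:200]}')
```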
## Dataset Card

### Dataset Details

This dataset contains synthetic chains of thought and action involving 15 actions: ```OCR```, ```LocalizeObjects```, ```GetObjects```, ```EstimateRegionDepth```, ```EstimateObjectDepth```, ```Crop```, ```ZoomIn```, ```QueryLanguageModel```, ```GetImageToImagesSimilarity```, ```GetImageToTextsSimilarity```, ```GetTextToImagesSimilarity```, ```DetectFaces```, ```QueryKnowledgeBase```, ```Calculate```, and ```SolveMathEquation```. Additionally, the ```Terminate``` action is added for the model to provide a final answer.

You can find the detailed statistics of this dataset, including the distribution of data sources and the average and maximum number of images and turns, in the figure below:

*(Figure: dataset stats)*

### Uses

The intended use of this dataset is to finetune multi-modal language models to produce chains of thoughts and actions that answer difficult and complex visual questions.

### Direct Use

You can directly use this dataset to train multi-modal language models with the Mantis codebase. To train LLaVA-OneVision models, please use [cota-llava](https://huggingface.co/collections/Salesforce/taco-models-and-datasets-675333e57dd34a4adc5f3ff4) in the [collection](https://huggingface.co/collections/Salesforce/taco-models-and-datasets-675333e57dd34a4adc5f3ff4). To train other multi-modal language models, you may need to adapt the conversation format to your particular model.

### Out-of-Scope Use

This dataset should not be used for testing models.

### Source Data

The source data comes from [Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) and [Mantis-Instruct](https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct). They were collected from various existing datasets, including COCO, AOKVQA, ScienceQA, and Visual Genome, among others.

#### Data Collection and Processing

## Bias, Risks, and Limitations

Our dataset has the following limitations:
- The chains of thoughts and actions are generated by gpt-4o-2024-08-06 and thus inherit its biases;
- The actions are somewhat limited, as they cover mostly vision-centric tools such as DepthEstimation and some generic tools such as QueryKnowledgeBase;
- Please refer to the paper for additional limitations.

## License

The CoTA datasets are licensed under the noncommercial license [CC-BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data. This release is for research purposes only in support of an academic paper.

## Citation

```
@misc{ma2024tacolearningmultimodalaction,
      title={TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action},
      author={Zixian Ma and Jianguo Zhang and Zhiwei Liu and Jieyu Zhang and Juntao Tan and Manli Shu and Juan Carlos Niebles and Shelby Heinecke and Huan Wang and Caiming Xiong and Ranjay Krishna and Silvio Savarese},
      year={2024},
      eprint={2412.05479},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.05479},
}
```