---
license: cc-by-nc-4.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: images
    sequence: string
  - name: metadata
    struct:
    - name: dataset
      dtype: string
    - name: task_instruction
      dtype: string
  - name: conversation
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  splits:
  - name: cota_293k
    num_bytes: 684640621
    num_examples: 293105
  - name: cota_815k
    num_bytes: 1643764353
    num_examples: 815582
  download_size: 327551290
  dataset_size: 2328404974
configs:
- config_name: default
  data_files:
  - split: cota_293k
    path: data/cota_293k-*
  - split: cota_815k
    path: data/cota_815k-*
---

# ๐ŸŒฎ TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action

<h3 align="left"> <a href="https://taco-project.github.io/">๐ŸŒ Website</a> | <a href="https://arxiv.org/pdf/2412.05479">๐Ÿ“‘ Arxiv</a> | <a href="https://github.com/SalesforceAIResearch/CoTA">๐Ÿ’ป Code</a> | <a href="https://huggingface.co/collections/Salesforce/cota-datasets-675333e57dd34a4adc5f3ff4">๐Ÿค— Datasets</a> </h3>
    
<h5 align="left"> If you like our project or are interested in its updates, please star us :) Thank you! โญ </h5>

## Summary
TL;DR: CoTA is a large-scale dataset of synthetic Chains-of-Thought-and-Action (CoTA) traces generated by multi-modal large language models.

## Load data
```python
from datasets import load_dataset
dataset = load_dataset("Salesforce/cota-mantis", split="cota_293k")
```
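Both splits listed in the dataset configuration are available: ```cota_293k``` (293,105 examples) and ```cota_815k``` (815,582 examples).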

## Dataset Card

### Dataset Details

This dataset contains synthetic chains of thoughts and actions involving 15 actions: ```OCR```, ```LocalizeObjects```, ```GetObjects```, 
```EstimateRegionDepth```, ```EstimateObjectDepth```, ```Crop```, ```ZoomIn```, ```QueryLanguageModel```, ```GetImageToImagesSimilarity```, ```GetImageToTextsSimilarity```, 
```GetTextToImagesSimilarity```, ```DetectFaces```, ```QueryKnowledgeBase```, ```Calculate```, and ```SolveMathEquation```. In addition, a ```Terminate``` action
lets the model provide a final answer. You can find detailed statistics of this dataset below, 
including the distribution of data sources and the average and maximum numbers of images and turns:

<img src="dataset_stats.png" alt="dataset stats" width="800"/>
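Each example follows the schema declared in the metadata block above: an ```id```, a list of image paths, a ```metadata``` struct with the source dataset and task instruction, and a ```conversation``` list of turns. The following minimal sketch inspects one example; the field names come from the dataset configuration, and it assumes the conversation loads as a list of ```{role, content}``` dicts:

```python
from datasets import load_dataset

# Load the smaller split and look at the structure of one example.
dataset = load_dataset("Salesforce/cota-mantis", split="cota_293k")
example = dataset[0]

print(example["id"])                            # unique example id
print(example["images"])                        # list of image paths
print(example["metadata"]["dataset"])           # source dataset name
print(example["metadata"]["task_instruction"])  # task instruction

# Each turn is a dict with "role" and "content".
for turn in example["conversation"]:
    print(f'{turn["role"]}: {turn["content"][:80]}')
```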

<!-- ### Dataset Sources
- **Cauldron:** 
- **Mantis-Instruct:** 
 -->
### Uses

<!-- Address questions around how the dataset is intended to be used. -->
The intended use of this dataset is to fine-tune multi-modal language models to produce chains of thoughts and actions that answer difficult and complex visual questions.

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

You can directly use this dataset to train Mantis-based models with our [codebase](https://github.com/SalesforceAIResearch/TACO). To train LLaVA-OneVision models, please use ```cota-llava``` in the [collection](https://huggingface.co/collections/Salesforce/cota-datasets-675333e57dd34a4adc5f3ff4).
To train other multi-modal language models, you might need to adapt the conversation format to match your particular model, as sketched below.
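As a hedged sketch of such an adaptation (only the ```conversation``` list of ```{role, content}``` turns comes from this dataset's schema; the source role names ```user```/```assistant``` and the target names ```human```/```gpt``` are illustrative assumptions about your training framework):

```python
from datasets import load_dataset

# Hypothetical role mapping: some training frameworks expect
# "human"/"gpt" instead of "user"/"assistant". Adjust to your model.
ROLE_MAP = {"user": "human", "assistant": "gpt"}

def adapt_conversation(example):
    # Remap role names while leaving the turn contents untouched.
    example["conversation"] = [
        {"role": ROLE_MAP.get(turn["role"], turn["role"]),
         "content": turn["content"]}
        for turn in example["conversation"]
    ]
    return example

dataset = load_dataset("Salesforce/cota-mantis", split="cota_293k")
dataset = dataset.map(adapt_conversation)
```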

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

This dataset should not be used for testing models. 

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
The source data comes from [Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron) and [Mantis-Instruct](https://huggingface.co/datasets/TIGER-Lab/Mantis-Instruct). 
They are collected from various existing datasets, including COCO, AOKVQA, ScienceQA, Visual Genome, etc. 

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

<img src="data_gen.png" width=1000>
<!-- ![Dataset generation](dataset_gen.png "Dataset generation process") -->


## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Our dataset has the following limitations: 
- The chains of thoughts and actions are generated by gpt-4o-2024-08-06 and thus inherit its biases.
- The actions are somewhat limited: they cover mostly vision-centric tools such as DepthEstimation and a few generic tools such as QueryKnowledgeBase.
- Please refer to the paper for additional limitations.

## License

The CoTA datasets are licensed under the non-commercial license [CC-BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data. This release is for research purposes only in support of an academic paper.

## Citation
```
@misc{ma2024tacolearningmultimodalaction,
      title={TACO: Learning Multi-modal Action Models with Synthetic Chains-of-Thought-and-Action}, 
      author={Zixian Ma and Jianguo Zhang and Zhiwei Liu and Jieyu Zhang and Juntao Tan and Manli Shu and Juan Carlos Niebles and Shelby Heinecke and Huan Wang and Caiming Xiong and Ranjay Krishna and Silvio Savarese},
      year={2024},
      eprint={2412.05479},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2412.05479}, 
}
```