Update README.md

tags:
- tactile
- robotics
pretty_name: Sensor-Invariant Tactile Representation
size_categories:
- 1M<n<10M
---

# SITR Dataset & Weights

This repository hosts both the dataset and pre-trained model weights for the Sensor-Invariant Tactile Representation (SITR) paper. The dataset supports training and evaluating models for sensor-invariant tactile representations across simulated and real-world settings, while the pre-trained weights enable immediate deployment and fine-tuning for various tactile perception tasks.

The codebase implementing SITR is available on GitHub: [SITR Codebase](https://github.com/hgupt3/gsrl)

For more details on the underlying methods and experiments, please visit our [project website](https://hgupt3.github.io/sitr/) and read the [arXiv paper](https://arxiv.org/abs/2502.19638).

---

## Pre-trained Model Weights

The pre-trained model weights are available for immediate use in inference or fine-tuning. These weights were trained on our large-scale simulated dataset and have been validated across multiple real-world sensors.

### Downloading the Weights

```bash
wget https://huggingface.co/datasets/hgupt3/sitr_dataset/resolve/main/checkpoints.zip
unzip checkpoints.zip -d your_desired_directory
```

### Weights Directory Structure

The weights directory contains the following structure:

```
checkpoints/
├── SITR_B18.pth                 # Base pre-trained model weights (371MB)
├── classification/              # Classification task weights
│   └── SITR_base/               # Base model with fine-tuned head for classification on 1 sensor
│       ├── sensor_0000.pth      # Weights for sensor 0
│       ├── sensor_0001.pth      # Weights for sensor 1
│       └── ...
└── pose_estimation/             # Pose estimation task weights
    └── SITR_base/               # Base model with fine-tuned head for pose estimation on 1 sensor
        ├── sensor_0000.pth      # Weights for sensor 0
        ├── sensor_0001.pth      # Weights for sensor 1
        └── ...
```

You can use the `SITR_B18.pth` weights for:
1. **Zero-shot inference** on new tactile data
2. **Fine-tuning** for specific tasks
3. **Feature extraction** for downstream applications

For detailed usage instructions and examples, please refer to the [SITR Codebase](https://github.com/hgupt3/gsrl); a minimal loading sketch is shown below.
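
The sketch below shows one way to open the base checkpoint and inspect its tensors with PyTorch. The path follows the directory layout above, but the internal layout of the file (for example, whether the weights are nested under a key such as `"model"`) is an assumption on our part; use the loading utilities in the SITR codebase for actual inference or fine-tuning.

```python
import torch

# Minimal sketch: load the released base checkpoint and list a few tensors.
# The key structure inside the file is assumed, not documented here.
ckpt = torch.load("checkpoints/SITR_B18.pth", map_location="cpu")

# Some checkpoints store the state dict directly, others nest it under a key.
state_dict = ckpt.get("model", ckpt) if isinstance(ckpt, dict) else ckpt

for name, value in list(state_dict.items())[:10]:
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(f"{name}: {shape}")
```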

---

## Dataset Overview

The SITR dataset consists of three main parts:

1. **Simulated Tactile Dataset**
A large-scale synthetic dataset generated using physics-based rendering (PBR) in Blender. This dataset spans 100 unique simulated sensor configurations with tactile signals, calibration images, and corresponding surface normal maps. It includes 10K unique contact configurations generated using 50 high-resolution 3D meshes of common household objects, resulting in a pre-training dataset of 1M samples.

2. **Classification Tactile Dataset**
Data collected from 7 real sensors (including variations of GelSight Mini, GelSight Hex, GelSight Wedge, and DIGIT). For the classification task, 20 objects are pressed against each sensor at various poses and depths, accumulating 1K tactile images per object (140K images in total, with 20K per sensor). We used 16 objects for our classification experiments, as some items were deemed unsuitable (this was decided before experimentation). The dataset is provided as separate train (80%) and test (20%) sets.

3. **Pose Estimation Tactile Dataset**
For pose estimation, tactile signals are recorded using a modified Ender-3 Pro 3D printer equipped with 3D-printed indenters. This setup provides accurate ground truth (x, y, z coordinates) for contact points, where all coordinates are specified in millimeters. Data were collected for 6 indenters across 4 sensors, resulting in 1K samples per indenter (24K images in total, 6K per sensor). This dataset is also organized into train (80%) and test (20%) sets.

---

The simulated dataset is split into two parts due to its size:
- `renders_part_aa.zip`
- `renders_part_ab.zip`

Download both files using:

```bash
wget https://huggingface.co/datasets/hgupt3/sitr_dataset/resolve/main/renders_part_aa
wget https://huggingface.co/datasets/hgupt3/sitr_dataset/resolve/main/renders_part_ab
```

**To merge and unzip:**

1. **Merge the parts into a single zip file:**

```bash
cat renders_part_aa renders_part_ab > renders.zip
rm renders_part_aa renders_part_ab  # Remove the split files
```

2. **Unzip the merged file:**

```bash
unzip renders.zip -d your_desired_directory
rm renders.zip
```
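
If `cat` or `unzip` is not available (for example on Windows), the same merge-and-extract step can be done with Python's standard library. This is just a sketch and assumes both parts were downloaded into the current directory:

```python
import shutil
import zipfile

# Concatenate the two split parts back into a single archive,
# then extract it (equivalent to the cat/unzip commands above).
with open("renders.zip", "wb") as merged:
    for part in ("renders_part_aa", "renders_part_ab"):
        with open(part, "rb") as src:
            shutil.copyfileobj(src, merged)

with zipfile.ZipFile("renders.zip") as archive:
    archive.extractall("your_desired_directory")
```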

### Real-World Datasets (Classification & Pose Estimation)

Download the classification dataset:

```bash
wget https://huggingface.co/datasets/hgupt3/sitr_dataset/resolve/main/classification_dataset.zip
unzip classification_dataset.zip -d your_desired_directory
rm classification_dataset.zip
```

Download the pose estimation dataset:

```bash
wget https://huggingface.co/datasets/hgupt3/sitr_dataset/resolve/main/pose_dataset.zip
unzip pose_dataset.zip -d your_desired_directory
rm pose_dataset.zip
```

Each dataset contains:
- `train_set/` (80% of the data)
- `test_set/` (20% of the data)
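
After extraction, a quick sanity check is to count the files under each split; the sketch below assumes the archives were unpacked into `your_desired_directory` as in the commands above:

```python
from pathlib import Path

# Count the files in each split to confirm the roughly 80/20 train/test ratio.
root = Path("your_desired_directory")
for split in ("train_set", "test_set"):
    n_files = sum(1 for p in (root / split).rglob("*") if p.is_file())
    print(f"{split}: {n_files} files")
```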

---

## File Structure

### 1. Simulated Tactile Dataset

```
...
```

### 3. Pose Estimation Dataset

Each of the `train_set/` and `test_set/` directories is structured as follows:

```
train_set/ (or test_set/)
...
```

## Citation

If you use this dataset or model weights in your research, please cite:

```bibtex
@misc{gupta2025sensorinvarianttactilerepresentation,
      title={Sensor-Invariant Tactile Representation},
      author={Harsh Gupta and Yuchen Mo and Shengmiao Jin and Wenzhen Yuan},
      year={2025},
      eprint={2502.19638},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2502.19638}
}
```

## License

The dataset and model weights are licensed under the MIT License. See the LICENSE file for details.

If you have any questions or need further clarification, please feel free to reach out.