include visualizations

Files changed:
- README.md +31 -37
- data/metadata/categories.json +13 -3
- data/metadata/dataset_info.json +12 -3
- data/metadata/furniture_index.json +0 -0
- data/metadata/query_index.json +0 -0
- data/metadata/scene_index.json +0 -0
- data/metadata/styles.json +13 -3
- uncompress_dataset.sh +60 -12
- visualizations/example_0.html +0 -0
- visualizations/example_1.html +0 -0
- visualizations/example_2.html +0 -0
- visualizations/example_3.html +0 -0
- visualizations/example_4.html +0 -0
- visualizations/index.html +78 -0
- visualizations/overview.pdf +0 -0
- visualizations/overview.png +3 -0
- visualize_html.py +292 -0
README.md
CHANGED
@@ -8,11 +8,14 @@ size_categories:
 ---
 # DeepFurniture Dataset
 
-A large-scale dataset for furniture understanding, featuring **photo-realistic rendered indoor scenes** with **high-quality 3D furniture models**. The dataset contains about 24k indoor images, 170k furniture instances, and 20k unique furniture identities, all rendered by industry-leading rendering engines from [Kujiale](https://coohom.com).
-
 This dataset is introduced in our paper:
 [Furnishing Your Room by What You See: An End-to-End Furniture Set Retrieval Framework with Rich Annotated Benchmark Dataset](https://arxiv.org/abs/1911.09299)
 
+<img src="visualizations/overview.png" width="100%"/>
+
+A large-scale dataset for furniture understanding, featuring **photo-realistic rendered indoor scenes** with **high-quality 3D furniture models**. The dataset contains about 24k indoor images, 170k furniture instances, and 20k unique furniture identities, all rendered by industry-leading rendering engines from [Kujiale](https://coohom.com).
+
+
 ## Key Features
 
 - **Photo-Realistic Rendering**: All indoor scenes are rendered using professional rendering engines, providing realistic lighting, shadows, and textures

@@ -34,31 +37,21 @@ DeepFurniture provides hierarchical annotations at three levels:
 - Categories: 11 furniture types
 - Style tags: 11 different styles
 
-1. Modern
-2. Country
-3. European/American
-4. Chinese
-5. Japanese
-6. Mediterranean
-7. Southeast-Asian
-8. Nordic
-9. Industrial
-10. Eclectic
-11. Other
+## Dataset Visualization
+
+You can view example visualizations of the dataset [here](visualizations/index.html). These examples show:
+- Scene images with instance annotations
+- Depth maps
+- Furniture instance details
+
+## Benchmarks
+
+This dataset supports three main benchmarks:
+1. Furniture Detection/Segmentation
+2. Furniture Instance Retrieval
+3. Furniture Retrieval
+
+For benchmark details and baselines, please refer to our paper.
 
 ## Dataset Structure
 
@@ -85,9 +78,13 @@ data/
 # Clone the repository
 git lfs install  # Make sure Git LFS is installed
 git clone https://huggingface.co/datasets/byliu/DeepFurniture
+```
 
-
-
+[Optional] Uncompress the dataset. The current dataset loader only works with uncompressed assets, so to use the provided loader you'll need to uncompress the dataset first.
+The dataset loader for compressed assets is TBD.
+```
+cd DeepFurniture
+bash uncompress_dataset.sh -s data -t uncompressed_data
 ```
 
 ### 2. Data Format

@@ -132,7 +129,7 @@
 from deepfurniture import DeepFurnitureDataset
 
 # Initialize dataset
-dataset = DeepFurnitureDataset("path/to/
+dataset = DeepFurnitureDataset("path/to/uncompressed_data")
 
 # Access a scene
 scene = dataset[0]

@@ -145,14 +142,11 @@ for instance in scene['instances']:
     print(f"Style(s): {instance['style_names']}")
 ```
 
-2. Furniture Instance Retrieval
-3. Furniture Retrieval
+### 4. To visualize each indoor scene
+```
+python visualize_html.py --dataset ./uncompressed_data --scene_idx 101 --output scene_101.html
+```
 
-For benchmark details and baselines, please refer to our paper.
 
 ## License
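The loader interface shown above is enough for simple whole-dataset passes. Below is a minimal sketch that tallies instances per category; it assumes only the `DeepFurnitureDataset` accessors visible in this diff (`dataset[idx]`, `scene['instances']`, `instance['category_name']`), plus a `__len__` on the loader, which is an assumption rather than something the README shows:

```
# Sketch only: uses the loader API shown in the README diff above.
from collections import Counter

from deepfurniture import DeepFurnitureDataset

dataset = DeepFurnitureDataset("uncompressed_data")  # output of uncompress_dataset.sh

category_counts = Counter()
num_scenes = min(100, len(dataset))  # len() on the loader is an assumption
for scene_idx in range(num_scenes):
    scene = dataset[scene_idx]
    for instance in scene['instances']:
        category_counts[instance['category_name']] += 1

for name, count in category_counts.most_common():
    print(f"{name}: {count}")
```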
data/metadata/categories.json
CHANGED
@@ -1,3 +1,13 @@
-
-
-
+{
+  "1": "cabinet#shelf",
+  "2": "table",
+  "3": "chair#stool",
+  "4": "lamp",
+  "5": "door",
+  "6": "bed",
+  "7": "sofa",
+  "8": "plant",
+  "9": "decoration",
+  "10": "curtain",
+  "11": "home-appliance"
+}
data/metadata/dataset_info.json
CHANGED
@@ -1,3 +1,12 @@
-
-
-
+{
+  "name": "DeepFurniture",
+  "version": "1.0.0",
+  "description": "A large-scale dataset for furniture understanding with rich annotations",
+  "citation": "Add citation here",
+  "license": "Add license here",
+  "statistics": {
+    "num_scenes": 24182,
+    "num_furnitures": 24742,
+    "num_queries": 7264
+  }
+}
data/metadata/furniture_index.json
CHANGED
The diff for this file is too large to render. See raw diff

data/metadata/query_index.json
CHANGED
The diff for this file is too large to render. See raw diff

data/metadata/scene_index.json
CHANGED
The diff for this file is too large to render. See raw diff
data/metadata/styles.json
CHANGED
@@ -1,3 +1,13 @@
-
-
-
+{
+  "1": "modern",
+  "2": "country",
+  "3": "European#American",
+  "4": "Chinese",
+  "5": "Japanese",
+  "6": "Mediterranean",
+  "7": "Southeast-Asian",
+  "8": "Nordic",
+  "9": "Industrial",
+  "10": "eclectic",
+  "11": "other"
+}
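Since both vocabularies above are flat ID-to-name JSON maps, decoding annotation IDs is a pair of dictionary lookups. A minimal sketch, assuming the data/metadata layout committed here; the `#` separator (as in "cabinet#shelf") looks like a joined multi-label, which is an inference from the data rather than documented behavior:

```
import json
from pathlib import Path

meta = Path("data/metadata")  # layout as committed in this dataset

categories = json.loads((meta / "categories.json").read_text())  # {"1": "cabinet#shelf", ...}
styles = json.loads((meta / "styles.json").read_text())          # {"1": "modern", ...}

def decode(category_id, style_ids):
    # Splitting on '#' to unpack merged labels is an assumption, not documented.
    category_names = categories[str(category_id)].split("#")
    style_names = [styles[str(sid)] for sid in style_ids]
    return category_names, style_names

print(decode(1, [1, 8]))  # (['cabinet', 'shelf'], ['modern', 'Nordic'])
```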
uncompress_dataset.sh
CHANGED
@@ -2,7 +2,7 @@
 
 # Usage function
 usage() {
-    echo "Usage: $0 -s SOURCE_DIR -t TARGET_DIR [-c CHUNK_TYPE] [-h]"
+    echo "Usage: $0 -s SOURCE_DIR -t TARGET_DIR [-c CHUNK_TYPE] [-m MAX_FILES] [-h]"
     echo "Uncompress chunked DeepFurniture dataset"
     echo ""
     echo "Required arguments:"

@@ -12,16 +12,18 @@ usage() {
     echo "Optional arguments:"
     echo "  -c CHUNK_TYPE  Specific chunk type to process (scenes, furnitures, queries)"
     echo "                 If not specified, all chunk types will be processed"
+    echo "  -m MAX_FILES   Maximum number of files to process per type (default: process all)"
     echo "  -h             Show this help message"
     exit 1
 }
 
 # Process command line arguments
-while getopts "s:t:c:h" opt; do
+while getopts "s:t:c:m:h" opt; do
     case $opt in
         s) SOURCE_DIR="$OPTARG";;
         t) TARGET_DIR="$OPTARG";;
         c) CHUNK_TYPE="$OPTARG";;
+        m) MAX_FILES="$OPTARG";;
         h) usage;;
         ?) usage;;
     esac

@@ -39,6 +41,15 @@ if [ ! -d "$SOURCE_DIR" ]; then
     exit 1
 fi
 
+# Validate MAX_FILES if provided
+if [ -n "$MAX_FILES" ]; then
+    if ! [[ "$MAX_FILES" =~ ^[0-9]+$ ]]; then
+        echo "Error: MAX_FILES must be a positive integer"
+        exit 1
+    fi
+    echo "Will process maximum $MAX_FILES files per type"
+fi
+
 # Create target directory structure
 mkdir -p "$TARGET_DIR"/{metadata,scenes,furnitures,queries}
 

@@ -62,21 +73,32 @@ process_chunks() {
     if [ ! -d "$src_dir" ]; then
         echo "Warning: Directory not found: $src_dir"
         return
-    fi
+    fi
+
+    # Get list of chunks and sort them
+    chunks=($(ls -v "$src_dir"/*.tar.gz 2>/dev/null))
+    total_chunks=${#chunks[@]}
 
-    # Count total chunks for progress
-    total_chunks=$(ls "$src_dir"/*.tar.gz 2>/dev/null | wc -l)
     if [ "$total_chunks" -eq 0 ]; then
         echo "No chunks found in $src_dir"
         return
-    fi
+    fi
+
+    # Determine how many chunks to process based on MAX_FILES
+    files_per_chunk=1000  # Default files per chunk based on dataset structure
+    if [ -n "$MAX_FILES" ]; then
+        chunks_needed=$(( (MAX_FILES + files_per_chunk - 1) / files_per_chunk ))
+        if [ "$chunks_needed" -lt "$total_chunks" ]; then
+            total_chunks=$chunks_needed
+            echo "Limiting to $total_chunks chunks ($MAX_FILES files) for $type"
+        fi
+    fi
 
-    # Process
-        current=$((current + 1))
+    # Process chunks
+    for ((i = 0; i < total_chunks; i++)); do
+        chunk="${chunks[$i]}"
         chunk_name=$(basename "$chunk")
-        printf "Extracting %s (%d/%d)..." "$chunk_name"
+        printf "Extracting %s (%d/%d)..." "$chunk_name" $((i + 1)) "$total_chunks"
 
         if tar -xzf "$chunk" -C "$target_dir" 2>/dev/null; then
             echo " done"

@@ -84,6 +106,21 @@ process_chunks() {
             echo " failed"
             echo "Warning: Failed to extract $chunk_name"
         fi
+
+        # If this is the last chunk and we have MAX_FILES set,
+        # we might need to remove excess files
+        if [ -n "$MAX_FILES" ] && [ "$i" -eq "$((total_chunks - 1))" ]; then
+            # Calculate how many files we should have
+            local expected_total=$MAX_FILES
+            local current_total=$(ls "$target_dir" | wc -l)
+
+            if [ "$current_total" -gt "$expected_total" ]; then
+                echo "Trimming excess files to meet MAX_FILES limit..."
+                # Remove excess files (keeping the first MAX_FILES files)
+                ls "$target_dir" | sort | tail -n+"$((expected_total + 1))" | \
+                    xargs -I {} rm -rf "$target_dir/{}"
+            fi
+        fi
     done
 }
 

@@ -112,8 +149,10 @@ echo -e "\nValidating extracted files..."
 # Check scenes
 if [ -z "$CHUNK_TYPE" ] || [ "$CHUNK_TYPE" = "scenes" ]; then
     missing_files=0
+    total_scenes=0
     for scene_dir in "$TARGET_DIR"/scenes/*; do
         if [ -d "$scene_dir" ]; then
+            total_scenes=$((total_scenes + 1))
             for required in "image.jpg" "annotation.json"; do
                 if [ ! -f "$scene_dir/$required" ]; then
                     echo "Warning: Missing $required in $(basename "$scene_dir")"

@@ -122,8 +161,17 @@ if [ -z "$CHUNK_TYPE" ] || [ "$CHUNK_TYPE" = "scenes" ]; then
             done
         fi
     done
-    echo "Scene validation complete. Missing files: $missing_files"
+    echo "Scene validation complete. Processed $total_scenes scenes. Missing files: $missing_files"
 fi
 
+# Print final statistics
+echo -e "\nExtraction Summary:"
+for type in scenes furnitures queries; do
+    if [ -z "$CHUNK_TYPE" ] || [ "$CHUNK_TYPE" = "$type" ]; then
+        file_count=$(find "$TARGET_DIR/$type" -type f | wc -l)
+        echo "$type: $file_count files"
+    fi
+done
+
 echo "Dataset uncompression completed!"
 echo "Output directory: $TARGET_DIR"
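As a usage note for the options above: an invocation such as `bash uncompress_dataset.sh -s data -t sample_data -c scenes -m 500` should extract only the scene chunks and keep roughly the first 500 scene directories. Since the script assumes about 1,000 files per chunk, this unpacks a single chunk and then trims the excess; the invocation is an illustration of the new flags, not a command taken from this commit.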
visualizations/example_0.html
ADDED
The diff for this file is too large to render. See raw diff

visualizations/example_1.html
ADDED
The diff for this file is too large to render. See raw diff

visualizations/example_2.html
ADDED
The diff for this file is too large to render. See raw diff

visualizations/example_3.html
ADDED
The diff for this file is too large to render. See raw diff

visualizations/example_4.html
ADDED
The diff for this file is too large to render. See raw diff
visualizations/index.html
ADDED
@@ -0,0 +1,78 @@
+
+<!DOCTYPE html>
+<html>
+<head>
+    <title>DeepFurniture Dataset Visualizations</title>
+    <style>
+        body {
+            font-family: Arial;
+            max-width: 1200px;
+            margin: 0 auto;
+            padding: 20px;
+        }
+        .example-grid {
+            display: grid;
+            grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
+            gap: 20px;
+            margin: 20px 0;
+        }
+        .example-card {
+            border: 1px solid #ddd;
+            border-radius: 8px;
+            padding: 15px;
+            text-decoration: none;
+            color: inherit;
+            transition: transform 0.2s;
+        }
+        .example-card:hover {
+            transform: translateY(-5px);
+            box-shadow: 0 4px 8px rgba(0,0,0,0.1);
+        }
+        h1 { text-align: center; }
+    </style>
+</head>
+<body>
+    <h1>DeepFurniture Dataset Visualizations</h1>
+    <p>This page shows example visualizations from the DeepFurniture dataset. Click on any example to view the full visualization.</p>
+
+    <div class="example-grid">
+
+        <a href="example_0.html" class="example-card">
+            <h3>Scene 1</h3>
+            <p>Scene ID: DVD3MBOCEJLMIK6A573WKUY8</p>
+            <p>Number of instances: 11</p>
+            <p>Click to view details →</p>
+        </a>
+
+        <a href="example_1.html" class="example-card">
+            <h3>Scene 2</h3>
+            <p>Scene ID: DVD3MHYMEJI4OKYQBT3WKSY8</p>
+            <p>Number of instances: 4</p>
+            <p>Click to view details →</p>
+        </a>
+
+        <a href="example_2.html" class="example-card">
+            <h3>Scene 3</h3>
+            <p>Scene ID: DVD5FHHPEJJ4ESRZRX3WKUQ8</p>
+            <p>Number of instances: 8</p>
+            <p>Click to view details →</p>
+        </a>
+
+        <a href="example_3.html" class="example-card">
+            <h3>Scene 4</h3>
+            <p>Scene ID: DVD7IYG2EJLMOK6VYL3WKVY8</p>
+            <p>Number of instances: 15</p>
+            <p>Click to view details →</p>
+        </a>
+
+        <a href="example_4.html" class="example-card">
+            <h3>Scene 5</h3>
+            <p>Scene ID: DVDZU2DAEJI4OK6KYT3WKRA8</p>
+            <p>Number of instances: 6</p>
+            <p>Click to view details →</p>
+        </a>
+
+    </div>
+</body>
+</html>
+
visualizations/overview.pdf
ADDED
Binary file (461 kB).
visualizations/overview.png
ADDED
Binary file, stored with Git LFS.
visualize_html.py
ADDED
@@ -0,0 +1,292 @@
+import numpy as np
+from PIL import Image, ImageDraw, ImageFont
+import base64
+import io
+from deepfurniture import DeepFurnitureDataset
+from pycocotools import mask as mask_utils
+
+
+def save_image_base64(image):
+    """Convert PIL image to a base64-encoded JPEG string."""
+    buffered = io.BytesIO()
+    image.save(buffered, format="JPEG", quality=90)
+    return base64.b64encode(buffered.getvalue()).decode()
+
+
+def create_instance_visualization(scene_data):
+    """Create combined instance visualization with both masks and bboxes."""
+    image = scene_data['image']
+    instances = scene_data['instances']
+
+    # Image dimensions for boundary checking
+    img_width, img_height = image.size
+
+    # Start with image at half opacity
+    vis_img = np.array(image, dtype=np.float32) * 0.5
+
+    # Get all segmentations
+    segmentations = []
+    for inst in instances:
+        if inst['segmentation']:
+            rle = {
+                'counts': inst['segmentation'],
+                'size': [img_height, img_width]
+            }
+            segmentations.append(rle)
+
+    # Create color map for instances with distinct colors
+    colors = np.array([
+        [0.9, 0.1, 0.1],  # Red
+        [0.1, 0.9, 0.1],  # Green
+        [0.1, 0.1, 0.9],  # Blue
+        [0.9, 0.9, 0.1],  # Yellow
+        [0.9, 0.1, 0.9],  # Magenta
+        [0.1, 0.9, 0.9],  # Cyan
+        [0.9, 0.5, 0.1],  # Orange
+        [0.5, 0.9, 0.1],  # Lime
+        [0.5, 0.1, 0.9],  # Purple
+    ])
+    colors = np.tile(colors, (len(instances) // len(colors) + 1, 1))[:len(instances)]
+
+    # Draw instance masks with higher opacity
+    if segmentations:
+        if isinstance(segmentations[0]['counts'], (list, tuple)):
+            segmentations = mask_utils.frPyObjects(
+                segmentations, img_height, img_width
+            )
+        masks = mask_utils.decode(segmentations)
+
+        for idx in range(masks.shape[2]):
+            color = colors[idx]
+            mask = masks[:, :, idx]
+            for c in range(3):
+                vis_img[:, :, c] += mask * np.array(image)[:, :, c] * 0.7 * color[c]
+
+    # Convert to PIL for drawing bounding boxes
+    vis_img = Image.fromarray(np.uint8(np.clip(vis_img, 0, 255)))
+    draw = ImageDraw.Draw(vis_img)
+
+    # Try to load a font for better text rendering
+    try:
+        font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 20)
+    except OSError:
+        try:
+            font = ImageFont.truetype("/System/Library/Fonts/Helvetica.ttc", 20)
+        except OSError:
+            font = ImageFont.load_default()
+
+    # Constants for text and box drawing
+    text_padding = 4
+    text_height = 24
+    text_width = 200
+    corner_length = 20
+
+    # Draw bounding boxes with labels
+    for idx, (instance, color) in enumerate(zip(instances, colors)):
+        bbox = instance['bounding_box']
+        color_tuple = tuple(int(c * 255) for c in color)
+
+        # Calculate label
+        furniture_id = instance['identity_id']
+        category = instance['category_name']
+        label = f"{category} ({furniture_id})"
+
+        # Draw bbox with double lines for better visibility
+        for offset in [2, 1]:
+            draw.rectangle([
+                max(0, bbox['xmin'] - offset),
+                max(0, bbox['ymin'] - offset),
+                min(img_width - 1, bbox['xmax'] + offset),
+                min(img_height - 1, bbox['ymax'] + offset)
+            ], outline=color_tuple, width=2)
+
+        # Determine text position (handle boundary cases)
+        # First try above the bbox
+        text_y = bbox['ymin'] - text_height - text_padding
+        if text_y < 0:  # If no space above, try below
+            text_y = bbox['ymax'] + text_padding
+
+        # Handle x position
+        text_x = bbox['xmin']
+        # If text would go beyond right edge, align to right edge
+        if text_x + text_width > img_width:
+            text_x = max(0, img_width - text_width)
+
+        # Draw background for text
+        text_pos = (text_x, text_y)
+        draw.rectangle([
+            text_pos[0] - 2,
+            text_pos[1] - 2,
+            min(img_width - 1, text_pos[0] + text_width),
+            min(img_height - 1, text_pos[1] + text_height)
+        ], fill='black')
+
+        # Draw text
+        draw.text(text_pos, label, fill=color_tuple, font=font)
+
+        # Add corner markers with boundary checking
+        corners = [
+            (bbox['xmin'], bbox['ymin']),  # Top-left
+            (bbox['xmax'], bbox['ymin']),  # Top-right
+            (bbox['xmin'], bbox['ymax']),  # Bottom-left
+            (bbox['xmax'], bbox['ymax'])   # Bottom-right
+        ]
+
+        for x, y in corners:
+            # Ensure corner markers stay within image bounds
+            # Horizontal lines
+            x1 = max(0, x - corner_length)
+            x2 = min(img_width - 1, x + corner_length)
+            draw.line([(x1, y), (x2, y)], fill=color_tuple, width=3)
+
+            # Vertical lines
+            y1 = max(0, y - corner_length)
+            y2 = min(img_height - 1, y + corner_length)
+            draw.line([(x, y1), (x, y2)], fill=color_tuple, width=3)
+
+    return vis_img
+
+
+def process_depth_map(depth_image):
+    """Process depth map for better visualization.
+
+    Args:
+        depth_image: PIL Image of depth map
+    Returns:
+        Processed depth map as PIL Image
+    """
+    # Convert to numpy array
+    depth = np.array(depth_image)
+
+    # Normalize depth to 0-1 range
+    if depth.max() > depth.min():
+        depth = (depth - depth.min()) / (depth.max() - depth.min())
+
+    # Apply colormap (viridis-like)
+    colored_depth = np.zeros((*depth.shape, 3))
+    colored_depth[..., 0] = (1 - depth) * 0.4  # Red channel
+    colored_depth[..., 1] = np.abs(depth - 0.5) * 0.8  # Green channel
+    colored_depth[..., 2] = depth * 0.8  # Blue channel
+
+    # Convert to uint8 and then to PIL
+    colored_depth = (colored_depth * 255).astype(np.uint8)
+    return Image.fromarray(colored_depth)
+
+
+def visualize_html(dataset, scene_idx, output_path='scene.html'):
+    """Generate HTML visualization for a scene."""
+    scene_data = dataset[scene_idx]
+
+    # Create visualizations
+    instance_vis = create_instance_visualization(scene_data)
+
+    depth_vis = None
+    if scene_data['depth']:
+        depth_vis = process_depth_map(scene_data['depth'])
+
+    # Get base64 encoded images (JPEG, so the data URIs below use image/jpeg)
+    scene_img = save_image_base64(scene_data['image'])
+    instance_vis = save_image_base64(instance_vis)
+    depth_img = save_image_base64(depth_vis) if depth_vis else None
+
+    # Create HTML with minimal CSS
+    html = f'''
+    <html>
+    <head>
+    <style>
+        body {{ font-family: Arial; max-width: 2000px; margin: 0 auto; }}
+        .grid {{ display: grid; grid-template-columns: repeat(auto-fill, minmax(300px, 1fr)); gap: 20px; }}
+        .main-images {{
+            grid-template-columns: repeat(auto-fit, minmax(800px, 1fr));
+            margin: 20px 0;
+        }}
+        .card {{
+            border: 1px solid #ddd;
+            padding: 15px;
+            border-radius: 8px;
+            box-shadow: 0 2px 4px rgba(0,0,0,0.1);
+        }}
+        .card h3 {{
+            font-size: 20px;
+            margin-bottom: 15px;
+        }}
+        img {{ max-width: 100%; height: auto; }}
+        h1 {{
+            color: #333;
+            font-size: 32px;
+            text-align: center;
+            margin: 30px 0;
+        }}
+        h2 {{
+            color: #333;
+            font-size: 28px;
+            margin: 25px 0;
+        }}
+        .instance-info {{
+            color: #444;
+            font-size: 16px;
+            line-height: 1.4;
+        }}
+        .main-images img {{
+            width: 100%;
+            object-fit: contain;
+            max-height: 800px;  /* Increased max height */
+        }}
+    </style>
+    </head>
+    <body>
+        <h1>Scene ID: {scene_data['scene_id']}</h1>
+
+        <h2>Scene Visualizations</h2>
+        <div class="grid main-images">
+            <div class="card">
+                <h3>Original Scene</h3>
+                <img src="data:image/jpeg;base64,{scene_img}">
+            </div>
+            <div class="card">
+                <h3>Instance Visualization (Masks + Bboxes)</h3>
+                <img src="data:image/jpeg;base64,{instance_vis}">
+            </div>
+            {f'<div class="card"><h3>Depth Map</h3><img src="data:image/jpeg;base64,{depth_img}"></div>' if depth_img else ''}
+        </div>
+
+        <h2>Furniture Instances</h2>
+        <div class="grid">
+    '''
+
+    # Add furniture previews
+    for instance in scene_data['instances']:
+        furniture_id = str(instance['identity_id'])
+        if furniture_id in scene_data['furniture_previews']:
+            preview = save_image_base64(scene_data['furniture_previews'][furniture_id])
+            bbox = instance['bounding_box']
+
+            html += f'''
+            <div class="card">
+                <h3>{instance['category_name']} (ID: {furniture_id})</h3>
+                <div class="instance-info">
+                    <p>Style: {', '.join(instance['style_names'])}</p>
+                    <p>BBox: ({bbox['xmin']}, {bbox['ymin']}, {bbox['xmax']}, {bbox['ymax']})</p>
+                </div>
+                <img src="data:image/jpeg;base64,{preview}">
+            </div>
+            '''
+
+    html += '''
+        </div>
+    </body>
+    </html>
+    '''
+
+    with open(output_path, 'w') as f:
+        f.write(html)
+    print(f"Visualization saved to {output_path}")
+
+
+if __name__ == '__main__':
+    import argparse
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--dataset', required=True)
+    parser.add_argument('--scene_idx', type=int, required=True)
+    parser.add_argument('--output', default='scene.html')
+    args = parser.parse_args()
+
+    dataset = DeepFurnitureDataset(args.dataset)
+    visualize_html(dataset, args.scene_idx, args.output)
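visualize_html.py treats each `inst['segmentation']` as a COCO-style RLE dict (`{'counts': ..., 'size': [H, W]}`) and decodes it with pycocotools. A minimal, self-contained sketch of that round trip, using a synthetic mask (the image size and rectangle are made up for illustration):

```
import numpy as np
from pycocotools import mask as mask_utils

h, w = 480, 640                                     # illustrative image size
mask = np.zeros((h, w), dtype=np.uint8, order='F')  # pycocotools expects Fortran order
mask[100:200, 150:300] = 1                          # synthetic rectangular instance

rle = mask_utils.encode(mask)     # {'size': [480, 640], 'counts': b'...'}
decoded = mask_utils.decode(rle)  # back to an (H, W) uint8 array
assert (decoded == mask).all()
print(rle['size'], int(mask_utils.area(rle)))       # [480, 640] 15000
```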