xixu-me committed on
Commit 4eb5246 · 1 Parent(s): 08fe0f3

Add dataset metadata and update configuration files for FSL Product Classification dataset

Files changed (5)
  1. .dataset_viewer.yml +5 -0
  2. .gitattributes +2 -1
  3. LICENSE +21 -0
  4. README.md +1194 -0
  5. dataset_infos.json +43 -0
.dataset_viewer.yml ADDED
@@ -0,0 +1,5 @@
1
+ viewer: false
2
+ configs:
3
+ - config_name: default
4
+ data_files: "data.tzst"
5
+ description: "Full FSL Product Classification dataset with 763 classes and 279,747 images"
.gitattributes CHANGED
@@ -57,4 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
57
  # Video files - compressed
58
  *.mp4 filter=lfs diff=lfs merge=lfs -text
59
  *.webm filter=lfs diff=lfs merge=lfs -text
60
- data.tzst filter=lfs diff=lfs merge=lfs -text
60
+ # Archive files - compressed
61
+ *.tzst filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2025 Xi Xu
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
README.md CHANGED
@@ -1,3 +1,1197 @@
1
  ---
2
  license: mit
3
+ task_categories:
4
+ - image-classification
5
+ task_ids:
6
+ - multi-class-image-classification
7
+ - few-shot-image-classification
8
+ tags:
9
+ - computer-vision
10
+ - product-classification
11
+ - e-commerce
12
+ - retail
13
+ - few-shot-learning
14
+ - meta-learning
15
+ - benchmark
16
+ size_categories:
17
+ - 100K<n<1M
18
+ language:
19
+ - en
20
+ pretty_name: FSL Product Classification Dataset
21
+ configs:
22
+ - config_name: default
23
+ data_files: "data.tzst"
24
+ default: true
25
+ dataset_info:
26
+ features:
27
+ - name: image
28
+ dtype: image
29
+ - name: label
30
+ dtype: int64
31
+ - name: class_name
32
+ dtype: string
33
+ - name: image_id
34
+ dtype: string
35
+ splits:
36
+ - name: train
37
+ num_bytes: 9945644054
38
+ num_examples: 279747
39
+ download_size: 9945644054
40
+ dataset_size: 9945644054
41
  ---
42
+
43
+ # Few-Shot Learning (FSL) Product Classification Dataset
44
+
45
+ ## Dataset Description
46
+
47
+ This dataset is designed for **Few-Shot Learning (FSL)** research in product classification tasks. It contains product images organized into 763 distinct classes, with an average of approximately 367 images per class (279,747 total images), making it ideal for training and evaluating few-shot learning algorithms in e-commerce and retail scenarios. Note that class numbers are not continuous.
48
+
49
+ ### Key Features
50
+
51
+ - **763 product classes** covering diverse product categories
52
+ - **279,747 total images** (average of ~367 images per class)
53
+ - **High-quality product images** suitable for computer vision research
54
+ - **Variable class distribution** with non-continuous class numbers
55
+ - **Efficient TZST compression** for reduced storage and faster transfer
56
+ - **Compatible with Hugging Face Datasets** library
57
+
58
+ ### Dataset Statistics
59
+
60
+ - **Total Classes**: 763
61
+ - **Total Images**: 279,747
62
+ - **Images per Class**: ~367 on average (279,747 ÷ 763 ≈ 366.6); distribution varies by class
63
+ - **Class Numbers**: Non-continuous (some class numbers may be missing)
64
+ - **Image Format**: PNG
65
+ - **Typical Image Size**: 50-100 KB per image
67
+ - **Compressed Archive Size**: ~9.9 GB (data.tzst)
68
+ - **Extraction Requirements**: ~10 GB additional space
69
+
70
+ ## Dataset Structure
71
+
72
+ The dataset is stored in a compressed tzst archive ([`data.tzst`](data.tzst)) with the following structure:
73
+
74
+ ```text
75
+ data.tzst
76
+ ├── class_0/
77
+ │ ├── class_0_0.png
78
+ │ ├── class_0_1.png
79
+ │ └── ...
80
+ ├── class_1/
81
+ │ ├── class_1_0.png
82
+ │ ├── class_1_1.png
83
+ │ └── ...
84
+ └── ... (763 total classes with non-continuous numbers)
85
+ ```
86
+
87
+ **Note**: Class numbers are not continuous. For example, you might have class_0, class_2, class_5, etc., but not class_1, class_3, class_4. The total number of classes is 763.
88
+
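+ Because the class numbering has gaps, it can be useful to check which `class_*` directories are actually present after extraction. The snippet below is a minimal sketch, assuming the archive has already been unpacked into `extracted_data/` (see the extraction examples later in this README); it only counts the class directories and reports the unused ids.
+
+ ```python
+ import os
+ import re
+
+ def summarize_class_ids(data_dir="extracted_data"):
+     """Count class_* directories and report gaps in the class numbering."""
+     class_ids = sorted(
+         int(m.group(1))
+         for name in os.listdir(data_dir)
+         if (m := re.fullmatch(r"class_(\d+)", name))
+         and os.path.isdir(os.path.join(data_dir, name))
+     )
+     missing = set(range(class_ids[0], class_ids[-1] + 1)) - set(class_ids)
+     print(f"{len(class_ids)} class directories, "
+           f"ids {class_ids[0]}..{class_ids[-1]}, {len(missing)} ids unused")
+     return class_ids
+
+ # class_ids = summarize_class_ids()
+ ```
+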
91
+ ## Installation and Setup
92
+
93
+ ### Quick Start Installation
94
+
95
+ ```bash
96
+ # Create a new virtual environment (recommended)
97
+ python -m venv fsl-env
98
+ source fsl-env/bin/activate # On Windows: fsl-env\Scripts\activate
99
+
100
+ # Install core dependencies
101
+ pip install datasets tzst pillow
102
+
103
+ # Install additional dependencies for machine learning
104
+ pip install torch torchvision numpy scikit-learn matplotlib seaborn tqdm
105
+
106
+ # For Jupyter notebook users
107
+ pip install jupyter ipywidgets
108
+ ```
109
+
110
+ ### Complete Requirements
111
+
112
+ Create a `requirements.txt` file with the following dependencies:
113
+
114
+ ```text
115
+ # Core dependencies
116
+ datasets>=2.14.0
117
+ tzst>=1.2.8
118
+ pillow>=9.0.0
119
+
120
+ # Machine learning
121
+ torch>=1.9.0
122
+ torchvision>=0.10.0
123
+ numpy>=1.21.0
124
+ scikit-learn>=1.0.0
125
+
126
+ # Data analysis and visualization
127
+ pandas>=1.3.0
128
+ matplotlib>=3.4.0
129
+ seaborn>=0.11.0
130
+
131
+ # Progress bars and utilities
132
+ tqdm>=4.62.0
134
+
135
+ # Optional: for advanced few-shot learning
136
+ learn2learn>=0.1.7
137
+ higher>=0.2.1
138
+
139
+ # Optional: for notebook usage
140
+ jupyter>=1.0.0
141
+ ipywidgets>=7.6.0
142
+ ```
143
+
144
+ Install all requirements:
145
+
146
+ ```bash
147
+ pip install -r requirements.txt
148
+ ```
149
+
150
+ ### Docker Setup (Optional)
151
+
152
+ For a containerized environment:
153
+
154
+ ```dockerfile
155
+ FROM python:3.9-slim
156
+
157
+ WORKDIR /app
158
+
159
+ # Install system dependencies
160
+ RUN apt-get update && apt-get install -y \
161
+ git \
162
+ wget \
163
+ && rm -rf /var/lib/apt/lists/*
164
+
165
+ # Copy requirements and install Python dependencies
166
+ COPY requirements.txt .
167
+ RUN pip install --no-cache-dir -r requirements.txt
168
+
169
+ # Copy application code
170
+ COPY . .
171
+
172
+ # Set environment variables
173
+ ENV PYTHONPATH=/app
174
+ ENV HF_DATASETS_CACHE=/app/cache
175
+
176
+ # Create cache directory
177
+ RUN mkdir -p /app/cache
178
+
179
+ CMD ["python", "-c", "print('FSL Product Classification environment ready!')"]
180
+ ```
181
+
182
+ Build and run:
183
+
184
+ ```bash
185
+ docker build -t fsl-product-classification .
186
+ docker run -it --rm -v $(pwd)/data:/app/data fsl-product-classification bash
187
+ ```
188
+
189
+ ### Loading the Dataset
190
+
191
+ #### Option 1: Using Hugging Face Datasets (Recommended)
192
+
193
+ ```python
194
+ from datasets import load_dataset
195
+
196
+ # Load the complete dataset
197
+ dataset = load_dataset("xixu-me/fsl-product-classification")
198
+
199
+ # Access the training split
200
+ train_dataset = dataset["train"]
201
+
202
+ # Print dataset information
203
+ print(f"Dataset size: {len(train_dataset)}")
204
+ print(f"Features: {train_dataset.features}")
205
+
206
+ # Access individual samples
207
+ sample = train_dataset[0]
208
+ print(f"Image ID: {sample['image_id']}")
209
+ print(f"Class: {sample['class_name']} (Label: {sample['label']})")
210
+ print(f"Image: {sample['image']}") # PIL Image object
211
+ ```
212
+
213
+ #### Option 2: Manual Extraction and Loading
214
+
215
+ ```python
216
+ import os
217
+ from tzst import extract_archive
218
+ from datasets import Dataset, Features, Value, Image, ClassLabel
219
+ from PIL import Image as PILImage
220
+
221
+ # Extract the dataset archive
222
+ extract_archive("data.tzst", "extracted_data/")
223
+
224
+ # Create a custom dataset loader
225
+ def load_fsl_dataset(data_dir="extracted_data"):
226
+ samples = []
227
+ class_names = []
228
+
229
+ # Scan for class directories
230
+ for class_dir in sorted(os.listdir(data_dir)):
231
+ if class_dir.startswith("class_"):
232
+ class_path = os.path.join(data_dir, class_dir)
233
+ if os.path.isdir(class_path):
234
+ class_id = int(class_dir.split("_")[1])
235
+ class_names.append(class_dir)
236
+
237
+ # Load images from this class
238
+ for img_file in os.listdir(class_path):
239
+ if img_file.endswith('.png'):
240
+ img_path = os.path.join(class_path, img_file)
241
+ image_id = img_file.replace('.png', '')
242
+
243
+ samples.append({
244
+ 'image': img_path,
245
+ 'label': class_id,
246
+ 'class_name': class_dir,
247
+ 'image_id': image_id
248
+ })
249
+
250
+ # Create features definition
251
+ features = Features({
252
+ 'image': Image(),
253
+ 'label': Value('int64'),
254
+ 'class_name': Value('string'),
255
+ 'image_id': Value('string')
256
+ })
257
+
258
+ # Create dataset
259
+ return Dataset.from_list(samples, features=features)
260
+
261
+ # Load the dataset
262
+ dataset = load_fsl_dataset()
263
+ print(f"Loaded {len(dataset)} samples from {len(set(dataset['class_name']))} classes")
264
+ ```
265
+
266
+ #### Option 3: Streaming Mode for Large Archives
267
+
268
+ For memory-efficient processing of the large archive:
269
+
270
+ ```python
271
+ from tzst import extract_archive
272
+ import tempfile
273
+ import os
274
+
275
+ # Use streaming extraction for memory efficiency
276
+ with tempfile.TemporaryDirectory() as temp_dir:
277
+ # Extract with streaming mode
278
+ extract_archive("data.tzst", temp_dir, streaming=True)
279
+
280
+ # Process extracted data
281
+ dataset = load_fsl_dataset(temp_dir)
282
+ # ... your processing code here
283
+ ```
284
+
285
+ ### Data Exploration
286
+
287
+ ```python
288
+ from collections import Counter
289
+ import matplotlib.pyplot as plt
+ from datasets import load_dataset
290
+
291
+ # Load dataset
292
+ dataset = load_dataset("xixu-me/fsl-product-classification")["train"]
293
+
294
+ # Analyze class distribution
295
+ class_counts = Counter(dataset['class_name'])
296
+ print(f"Number of classes: {len(class_counts)}")
297
+ print(f"Average images per class: {len(dataset) / len(class_counts):.1f}")
298
+
299
+ # Plot class distribution (top 20 classes)
300
+ top_classes = class_counts.most_common(20)
301
+ classes, counts = zip(*top_classes)
302
+
303
+ plt.figure(figsize=(12, 6))
304
+ plt.bar(range(len(classes)), counts)
305
+ plt.xlabel('Class')
306
+ plt.ylabel('Number of Images')
307
+ plt.title('Top 20 Classes by Image Count')
308
+ plt.xticks(range(len(classes)), [c.replace('class_', '') for c in classes], rotation=45)
309
+ plt.tight_layout()
310
+ plt.show()
311
+
312
+ # Display sample images
313
+ import random
314
+
315
+ def show_samples(dataset, num_samples=8):
316
+ """Display random samples from the dataset"""
317
+ indices = random.sample(range(len(dataset)), num_samples)
318
+
319
+ fig, axes = plt.subplots(2, 4, figsize=(15, 8))
320
+ axes = axes.flatten()
321
+
322
+ for i, idx in enumerate(indices):
323
+ sample = dataset[idx]
324
+ axes[i].imshow(sample['image'])
325
+ axes[i].set_title(f"{sample['class_name']}\nID: {sample['image_id']}")
326
+ axes[i].axis('off')
327
+
328
+ plt.tight_layout()
329
+ plt.show()
330
+
331
+ # Show sample images
332
+ show_samples(dataset)
333
+ ```
334
+
335
+ ### Few-Shot Learning Setup
336
+
337
+ #### Basic Few-Shot Episode Creation
338
+
339
+ ```python
340
+ import random
341
+ from collections import defaultdict
342
+ import torch
343
+ from torch.utils.data import DataLoader
344
+ from datasets import load_dataset
345
+
346
+ def create_few_shot_split(dataset, n_way=5, k_shot=5, n_query=15, seed=None):
347
+ """
348
+ Create a few-shot learning episode
349
+
350
+ Args:
351
+ dataset: Hugging Face Dataset instance or custom dataset
352
+ n_way: Number of classes in the episode
353
+ k_shot: Number of support samples per class
354
+ n_query: Number of query samples per class
355
+ seed: Random seed for reproducibility
356
+
357
+ Returns:
358
+ support_set, query_set: Lists of (image, label) tuples
359
+ """
360
+ if seed is not None:
361
+ random.seed(seed)
362
+
363
+ # Group samples by class
364
+ class_samples = defaultdict(list)
365
+ for i, sample in enumerate(dataset):
366
+ class_samples[sample['label']].append(i)
367
+
368
+ # Filter classes with enough samples
369
+ valid_classes = [
370
+ class_id for class_id, indices in class_samples.items()
371
+ if len(indices) >= k_shot + n_query
372
+ ]
373
+
374
+ if len(valid_classes) < n_way:
375
+ raise ValueError(f"Not enough classes with {k_shot + n_query} samples. "
376
+ f"Found {len(valid_classes)}, need {n_way}")
377
+
378
+ # Sample n_way classes
379
+ episode_classes = random.sample(valid_classes, n_way)
380
+
381
+ support_set = []
382
+ query_set = []
383
+
384
+ for new_label, original_class in enumerate(episode_classes):
385
+ class_indices = random.sample(class_samples[original_class], k_shot + n_query)
386
+
387
+ # Support samples
388
+ for idx in class_indices[:k_shot]:
389
+ sample = dataset[idx]
390
+ support_set.append((sample['image'], new_label, sample['image_id']))
391
+
392
+ # Query samples
393
+ for idx in class_indices[k_shot:]:
394
+ sample = dataset[idx]
395
+ query_set.append((sample['image'], new_label, sample['image_id']))
396
+
397
+ return support_set, query_set
398
+
399
+ # Example usage
400
+ dataset = load_dataset("xixu-me/fsl-product-classification")["train"]
401
+
402
+ # Create a 5-way 5-shot episode
403
+ support_set, query_set = create_few_shot_split(dataset, n_way=5, k_shot=5, n_query=15)
404
+
405
+ print(f"Support set: {len(support_set)} samples")
406
+ print(f"Query set: {len(query_set)} samples")
407
+ ```
408
+
409
+ #### Advanced FSL Dataset Class
410
+
411
+ ```python
412
+ import torch
413
+ from torch.utils.data import Dataset
414
+ from torchvision import transforms
415
+ from PIL import Image
416
+ import numpy as np
417
+
418
+ class FSLProductDataset(Dataset):
419
+ """
420
+ Few-Shot Learning Dataset wrapper for product classification
421
+ """
422
+
423
+ def __init__(self, hf_dataset, transform=None, target_transform=None):
424
+ self.dataset = hf_dataset
425
+ self.transform = transform or self.get_default_transform()
426
+ self.target_transform = target_transform
427
+
428
+ # Create label mapping for non-continuous labels
429
+ unique_labels = sorted(set(hf_dataset['label']))
430
+ self.label_to_idx = {label: idx for idx, label in enumerate(unique_labels)}
431
+ self.idx_to_label = {idx: label for label, idx in self.label_to_idx.items()}
432
+
433
+ def get_default_transform(self):
434
+ """Default image transformations"""
435
+ return transforms.Compose([
436
+ transforms.Resize((224, 224)),
437
+ transforms.ToTensor(),
438
+ transforms.Normalize(mean=[0.485, 0.456, 0.406],
439
+ std=[0.229, 0.224, 0.225])
440
+ ])
441
+
442
+ def __len__(self):
443
+ return len(self.dataset)
444
+
445
+ def __getitem__(self, idx):
446
+ sample = self.dataset[idx]
447
+ image = sample['image']
448
+
449
+ # Convert to PIL Image if needed
450
+ if not isinstance(image, Image.Image):
451
+ image = Image.fromarray(image)
452
+
453
+ # Apply transforms
454
+ if self.transform:
455
+ image = self.transform(image)
456
+
457
+ # Map label to continuous indices
458
+ label = self.label_to_idx[sample['label']]
459
+
460
+ if self.target_transform:
461
+ label = self.target_transform(label)
462
+
463
+ return image, label, sample['image_id']
464
+
465
+ def get_class_samples(self, class_label):
466
+ """Get all samples for a specific class"""
467
+ indices = [i for i, sample in enumerate(self.dataset)
468
+ if sample['label'] == class_label]
469
+ return [self[i] for i in indices]
470
+
471
+ def create_episode_dataloader(self, n_way=5, k_shot=5, n_query=15,
472
+ batch_size=None, shuffle=True):
473
+ """Create a DataLoader for a few-shot episode"""
474
+ support_set, query_set = create_few_shot_split(
475
+ self.dataset, n_way=n_way, k_shot=k_shot, n_query=n_query
476
+ )
477
+
478
+ # Convert to tensors
479
+ support_images = []
480
+ support_labels = []
481
+ query_images = []
482
+ query_labels = []
483
+
484
+ for image, label, _ in support_set:
485
+ if isinstance(image, Image.Image):
486
+ image = self.transform(image) if self.transform else image
487
+ support_images.append(image)
488
+ support_labels.append(label)
489
+
490
+ for image, label, _ in query_set:
491
+ if isinstance(image, Image.Image):
492
+ image = self.transform(image) if self.transform else image
493
+ query_images.append(image)
494
+ query_labels.append(label)
495
+
496
+ support_data = (torch.stack(support_images), torch.tensor(support_labels))
497
+ query_data = (torch.stack(query_images), torch.tensor(query_labels))
498
+
499
+ return support_data, query_data
500
+
501
+ # Example usage with PyTorch
502
+ transform = transforms.Compose([
503
+ transforms.Resize((84, 84)), # Common size for few-shot learning
504
+ transforms.ToTensor(),
505
+ transforms.Normalize(mean=[0.485, 0.456, 0.406],
506
+ std=[0.229, 0.224, 0.225])
507
+ ])
508
+
509
+ # Load dataset
510
+ hf_dataset = load_dataset("xixu-me/fsl-product-classification")["train"]
511
+ fsl_dataset = FSLProductDataset(hf_dataset, transform=transform)
512
+
513
+ # Create episode data
514
+ support_data, query_data = fsl_dataset.create_episode_dataloader(
515
+ n_way=5, k_shot=1, n_query=15
516
+ )
517
+
518
+ print(f"Support images shape: {support_data[0].shape}")
519
+ print(f"Support labels shape: {support_data[1].shape}")
520
+ print(f"Query images shape: {query_data[0].shape}")
521
+ print(f"Query labels shape: {query_data[1].shape}")
522
+ ```
523
+
524
+ #### Meta-Learning Training Loop
525
+
526
+ ```python
527
+ import torch.nn as nn
528
+ import torch.optim as optim
529
+ from tqdm import tqdm
530
+
531
+ def train_fsl_model(model, dataset, num_episodes=1000, n_way=5, k_shot=1,
532
+ n_query=15, lr=0.001, device='cuda'):
533
+ """
534
+ Basic training loop for few-shot learning
535
+
536
+ Args:
537
+ model: Few-shot learning model (e.g., Prototypical Network)
538
+ dataset: FSLProductDataset instance
539
+ num_episodes: Number of training episodes
540
+ n_way, k_shot, n_query: Episode configuration
541
+ lr: Learning rate
542
+ device: Training device
543
+ """
544
+ model.to(device)
545
+ optimizer = optim.Adam(model.parameters(), lr=lr)
546
+ criterion = nn.CrossEntropyLoss()
547
+
548
+ model.train()
549
+ total_loss = 0
550
+ total_acc = 0
551
+
552
+ for episode in tqdm(range(num_episodes), desc="Training"):
553
+ # Create episode
554
+ support_data, query_data = dataset.create_episode_dataloader(
555
+ n_way=n_way, k_shot=k_shot, n_query=n_query
556
+ )
557
+
558
+ support_images, support_labels = support_data
559
+ query_images, query_labels = query_data
560
+
561
+ # Move to device
562
+ support_images = support_images.to(device)
563
+ support_labels = support_labels.to(device)
564
+ query_images = query_images.to(device)
565
+ query_labels = query_labels.to(device)
566
+
567
+ # Forward pass
568
+ optimizer.zero_grad()
569
+ logits = model(support_images, support_labels, query_images)
570
+ loss = criterion(logits, query_labels)
571
+
572
+ # Backward pass
573
+ loss.backward()
574
+ optimizer.step()
575
+
576
+ # Calculate accuracy
577
+ pred = logits.argmax(dim=1)
578
+ acc = (pred == query_labels).float().mean()
579
+
580
+ total_loss += loss.item()
581
+ total_acc += acc.item()
582
+
583
+ if (episode + 1) % 100 == 0:
584
+ avg_loss = total_loss / 100
585
+ avg_acc = total_acc / 100
586
+ print(f"Episode {episode + 1}: Loss = {avg_loss:.4f}, Acc = {avg_acc:.4f}")
587
+ total_loss = 0
588
+ total_acc = 0
589
+
590
+ # Example: Simple Prototypical Network
591
+ class SimplePrototypicalNetwork(nn.Module):
592
+ def __init__(self, backbone):
593
+ super().__init__()
594
+ self.backbone = backbone
595
+
596
+ def forward(self, support_images, support_labels, query_images):
597
+ # Encode images
598
+ support_features = self.backbone(support_images)
599
+ query_features = self.backbone(query_images)
600
+
601
+ # Calculate prototypes
602
+ n_way = len(torch.unique(support_labels))
603
+ prototypes = []
604
+
605
+ for class_idx in range(n_way):
606
+ class_mask = support_labels == class_idx
607
+ class_features = support_features[class_mask]
608
+ prototype = class_features.mean(dim=0)
609
+ prototypes.append(prototype)
610
+
611
+ prototypes = torch.stack(prototypes)
612
+
613
+ # Calculate distances and logits
614
+ distances = torch.cdist(query_features, prototypes)
615
+ logits = -distances # Negative distance as logits
616
+
617
+ return logits
618
+ ```
619
+
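+ To make the training loop concrete, here is a minimal usage sketch. The small convolutional backbone below is an illustrative assumption (it is not part of the dataset or any reference implementation); any encoder that maps image batches to flat feature vectors can be plugged into `SimplePrototypicalNetwork` the same way.
+
+ ```python
+ import torch.nn as nn
+
+ class ConvBackbone(nn.Module):
+     """Illustrative 4-block conv encoder: 3x84x84 images -> 1600-dim embeddings."""
+
+     def __init__(self, channels=64):
+         super().__init__()
+
+         def block(c_in, c_out):
+             return nn.Sequential(
+                 nn.Conv2d(c_in, c_out, 3, padding=1),
+                 nn.BatchNorm2d(c_out),
+                 nn.ReLU(),
+                 nn.MaxPool2d(2),
+             )
+
+         self.encoder = nn.Sequential(
+             block(3, channels), block(channels, channels),
+             block(channels, channels), block(channels, channels),
+         )
+
+     def forward(self, x):
+         # Flatten per-image feature maps into a single embedding vector
+         return self.encoder(x).flatten(start_dim=1)
+
+ # Wire the backbone into the prototypical network defined above and train it on
+ # episodes drawn from the FSLProductDataset wrapper (84x84 transform assumed):
+ # model = SimplePrototypicalNetwork(ConvBackbone())
+ # train_fsl_model(model, fsl_dataset, num_episodes=1000, n_way=5, k_shot=1)
+ ```
+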
620
+ ## Research Applications
621
+
622
+ This dataset is particularly well-suited for:
623
+
624
+ ### Few-Shot Learning
625
+
626
+ - **Meta-learning algorithms** (MAML, Prototypical Networks, Relation Networks)
627
+ - **Metric learning approaches** (Siamese Networks, Triplet Networks)
628
+ - **Gradient-based meta-learning** methods
629
+
630
+ ### Transfer Learning
631
+
632
+ - **Pre-training** on large-scale product data
633
+ - **Domain adaptation** from general images to products
634
+ - **Fine-tuning** strategies for product classification
635
+
636
+ ### Computer Vision Research
637
+
638
+ - **Product recognition** and retrieval
639
+ - **E-commerce applications**
640
+ - **Retail automation**
641
+ - **Visual search** systems
642
+
643
+ ## Benchmark Tasks
644
+
645
+ ### Standard Few-Shot Learning Evaluation
646
+
647
+ The following benchmarks are recommended for evaluating few-shot learning models on this dataset:
648
+
649
+ #### Standard Evaluation Protocol
650
+
651
+ ```python
652
+ import numpy as np
653
+ from sklearn.metrics import accuracy_score, classification_report
654
+ import json
655
+
656
+ def evaluate_fsl_model(model, dataset, num_episodes=600, n_way=5, k_shot=1,
657
+ n_query=15, device='cuda'):
658
+ """
659
+ Evaluate few-shot learning model using standard protocol
660
+
661
+ Returns:
662
+ dict: Evaluation results with mean accuracy and confidence interval
663
+ """
664
+ model.eval()
665
+ accuracies = []
666
+
667
+ with torch.no_grad():
668
+ for _ in tqdm(range(num_episodes), desc="Evaluating"):
669
+ # Create episode
670
+ support_data, query_data = dataset.create_episode_dataloader(
671
+ n_way=n_way, k_shot=k_shot, n_query=n_query
672
+ )
673
+
674
+ support_images, support_labels = support_data
675
+ query_images, query_labels = query_data
676
+
677
+ # Move to device
678
+ support_images = support_images.to(device)
679
+ support_labels = support_labels.to(device)
680
+ query_images = query_images.to(device)
681
+ query_labels = query_labels.to(device)
682
+
683
+ # Predict
684
+ logits = model(support_images, support_labels, query_images)
685
+ pred = logits.argmax(dim=1)
686
+
687
+ # Calculate episode accuracy
688
+ acc = (pred == query_labels).float().mean().item()
689
+ accuracies.append(acc)
690
+
691
+ # Calculate statistics
692
+ mean_acc = np.mean(accuracies)
693
+ std_acc = np.std(accuracies)
694
+ ci_95 = 1.96 * std_acc / np.sqrt(len(accuracies))
695
+
696
+ results = {
697
+ 'mean_accuracy': mean_acc,
698
+ 'std_accuracy': std_acc,
699
+ 'confidence_interval_95': ci_95,
700
+ 'num_episodes': num_episodes,
701
+ 'config': f"{n_way}-way {k_shot}-shot"
702
+ }
703
+
704
+ return results
705
+
706
+ # Benchmark configurations
707
+ benchmark_configs = [
708
+ {'n_way': 5, 'k_shot': 1, 'n_query': 15}, # 5-way 1-shot
709
+ {'n_way': 5, 'k_shot': 5, 'n_query': 15}, # 5-way 5-shot
710
+ {'n_way': 10, 'k_shot': 1, 'n_query': 15}, # 10-way 1-shot
711
+ {'n_way': 10, 'k_shot': 5, 'n_query': 15}, # 10-way 5-shot
712
+ ]
713
+
714
+ # Run benchmarks
715
+ def run_benchmark_suite(model, dataset, num_episodes=600):
716
+ """Run complete benchmark suite"""
717
+ results = {}
718
+
719
+ for config in benchmark_configs:
720
+ config_name = f"{config['n_way']}-way_{config['k_shot']}-shot"
721
+ print(f"\nEvaluating {config_name}...")
722
+
723
+ result = evaluate_fsl_model(
724
+ model, dataset, num_episodes=num_episodes, **config
725
+ )
726
+ results[config_name] = result
727
+
728
+ print(f"Accuracy: {result['mean_accuracy']:.4f} ± {result['confidence_interval_95']:.4f}")
729
+
730
+ return results
731
+
732
+ # Example usage
733
+ # results = run_benchmark_suite(model, test_dataset)
734
+ ```
735
+
736
+ #### Cross-Domain Evaluation
737
+
738
+ ```python
739
+ def create_cross_domain_split(dataset, train_ratio=0.6, val_ratio=0.2, test_ratio=0.2, seed=42):
740
+ """
741
+ Create train/validation/test splits at the class level for cross-domain evaluation
742
+
743
+ Args:
744
+ dataset: Hugging Face Dataset
745
+ train_ratio: Proportion of classes for training
746
+ val_ratio: Proportion of classes for validation
747
+ test_ratio: Proportion of classes for testing
748
+ seed: Random seed
749
+
750
+ Returns:
751
+ dict: Splits with class indices for each set
752
+ """
753
+ np.random.seed(seed)
754
+
755
+ # Get unique classes
756
+ unique_classes = sorted(set(dataset['label']))
757
+ n_classes = len(unique_classes)
758
+
759
+ # Calculate split sizes
760
+ n_train = int(n_classes * train_ratio)
761
+ n_val = int(n_classes * val_ratio)
762
+ n_test = n_classes - n_train - n_val
763
+
764
+ # Shuffle and split classes
765
+ shuffled_classes = np.random.permutation(unique_classes)
766
+ train_classes = shuffled_classes[:n_train]
767
+ val_classes = shuffled_classes[n_train:n_train + n_val]
768
+ test_classes = shuffled_classes[n_train + n_val:]
769
+
770
+ # Create sample indices for each split
771
+ train_indices = [i for i, sample in enumerate(dataset) if sample['label'] in train_classes]
772
+ val_indices = [i for i, sample in enumerate(dataset) if sample['label'] in val_classes]
773
+ test_indices = [i for i, sample in enumerate(dataset) if sample['label'] in test_classes]
774
+
775
+ return {
776
+ 'train': {'indices': train_indices, 'classes': train_classes.tolist()},
777
+ 'validation': {'indices': val_indices, 'classes': val_classes.tolist()},
778
+ 'test': {'indices': test_indices, 'classes': test_classes.tolist()}
779
+ }
780
+
781
+ # Create cross-domain splits
782
+ dataset = load_dataset("xixu-me/fsl-product-classification")["train"]
783
+ splits = create_cross_domain_split(dataset)
784
+
785
+ print(f"Train classes: {len(splits['train']['classes'])}")
786
+ print(f"Validation classes: {len(splits['validation']['classes'])}")
787
+ print(f"Test classes: {len(splits['test']['classes'])}")
788
+ ```
789
+
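+ The splits above are returned as index lists. As a short follow-up, the standard `Dataset.select` API turns them into separate `Dataset` objects for episodic training and evaluation:
+
+ ```python
+ # Materialize the class-level splits into Dataset objects
+ train_ds = dataset.select(splits["train"]["indices"])
+ val_ds = dataset.select(splits["validation"]["indices"])
+ test_ds = dataset.select(splits["test"]["indices"])
+
+ print(len(train_ds), len(val_ds), len(test_ds))
+ ```
+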
790
+ ### Performance Baselines
791
+
792
+ Expected performance ranges for different few-shot learning approaches:
793
+
794
+ | Method | 5-way 1-shot | 5-way 5-shot | 10-way 1-shot | 10-way 5-shot |
795
+ |--------|--------------|--------------|----------------|----------------|
796
+ | Random Baseline | 20.0% | 20.0% | 10.0% | 10.0% |
797
+ | Nearest Neighbor | 35-45% | 55-65% | 25-35% | 45-55% |
798
+ | Prototypical Networks | 45-55% | 65-75% | 35-45% | 55-65% |
799
+ | MAML | 48-58% | 68-78% | 38-48% | 58-68% |
800
+ | Relation Networks | 50-60% | 70-80% | 40-50% | 60-70% |
801
+
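+ The nearest-neighbor row can be approximated with a very small episode-level baseline. The sketch below is illustrative only: it classifies each query image by cosine similarity to the closest support embedding, and the choice of feature extractor is left open (the quoted ranges assume a reasonably strong pretrained backbone, which is an assumption rather than a measured result for any specific model).
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def nearest_neighbor_episode(backbone, support_data, query_data, device="cpu"):
+     """1-NN accuracy on one episode produced by create_episode_dataloader."""
+     support_images, support_labels = (t.to(device) for t in support_data)
+     query_images, query_labels = (t.to(device) for t in query_data)
+
+     backbone = backbone.to(device).eval()
+     with torch.no_grad():
+         support_feats = F.normalize(backbone(support_images), dim=1)
+         query_feats = F.normalize(backbone(query_images), dim=1)
+
+     # Cosine similarity between every query and every support embedding,
+     # then copy the label of the most similar support sample
+     sims = query_feats @ support_feats.T
+     pred = support_labels[sims.argmax(dim=1)]
+     return (pred == query_labels).float().mean().item()
+
+ # acc = nearest_neighbor_episode(backbone, support_data, query_data)
+ ```
+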
802
+ ### Utility Functions
803
+
804
+ ```python
805
+ import os
806
+ import json
807
+ from pathlib import Path
808
+ import matplotlib.pyplot as plt
809
+ import seaborn as sns
810
+ from collections import Counter
+ import numpy as np
811
+
812
+ def dataset_statistics(dataset):
813
+ """
814
+ Generate comprehensive statistics about the dataset
815
+
816
+ Args:
817
+ dataset: Hugging Face Dataset or list of samples
818
+
819
+ Returns:
820
+ dict: Dataset statistics
821
+ """
822
+ if hasattr(dataset, '__getitem__') and hasattr(dataset, '__len__'):
823
+ # Hugging Face Dataset
824
+ labels = dataset['label']
825
+ class_names = dataset['class_name']
826
+ image_ids = dataset['image_id']
827
+ else:
828
+ # List of samples
829
+ labels = [sample['label'] for sample in dataset]
830
+ class_names = [sample['class_name'] for sample in dataset]
831
+ image_ids = [sample['image_id'] for sample in dataset]
832
+
833
+ # Basic statistics
834
+ n_samples = len(labels)
835
+ n_classes = len(set(labels))
836
+ class_counts = Counter(labels)
837
+
838
+ # Calculate distribution statistics
839
+ counts = list(class_counts.values())
840
+ stats = {
841
+ 'total_samples': n_samples,
842
+ 'total_classes': n_classes,
843
+ 'avg_samples_per_class': n_samples / n_classes,
844
+ 'min_samples_per_class': min(counts),
845
+ 'max_samples_per_class': max(counts),
846
+ 'std_samples_per_class': np.std(counts),
847
+ 'class_distribution': dict(class_counts)
848
+ }
849
+
850
+ return stats
851
+
852
+ def plot_class_distribution(dataset, top_k=50, figsize=(15, 8)):
853
+ """
854
+ Plot class distribution
855
+
856
+ Args:
857
+ dataset: Dataset object
858
+ top_k: Number of top classes to show
859
+ figsize: Figure size
860
+ """
861
+ # Get class counts
862
+ if hasattr(dataset, '__getitem__'):
863
+ class_counts = Counter(dataset['label'])
864
+ else:
865
+ class_counts = Counter([sample['label'] for sample in dataset])
866
+
867
+ # Get top k classes
868
+ top_classes = class_counts.most_common(top_k)
869
+ labels, counts = zip(*top_classes)
870
+
871
+ # Plot
872
+ plt.figure(figsize=figsize)
873
+ bars = plt.bar(range(len(labels)), counts)
874
+ plt.xlabel('Class ID')
875
+ plt.ylabel('Number of Samples')
876
+ plt.title(f'Class Distribution (Top {top_k} Classes)')
877
+ plt.xticks(range(0, len(labels), max(1, len(labels)//10)),
878
+ [str(l) for l in labels[::max(1, len(labels)//10)]], rotation=45)
879
+
880
+ # Add statistics text
881
+ total_samples = sum(counts)
882
+ avg_samples = total_samples / len(counts)
883
+ plt.text(0.02, 0.98, f'Total Classes: {len(class_counts)}\n'
884
+ f'Shown Classes: {len(labels)}\n'
885
+ f'Avg Samples/Class: {avg_samples:.1f}',
886
+ transform=plt.gca().transAxes, verticalalignment='top',
887
+ bbox=dict(boxstyle='round', facecolor='wheat', alpha=0.5))
888
+
889
+ plt.tight_layout()
890
+ plt.show()
891
+
892
+ return top_classes
893
+
894
+ def save_dataset_info(dataset, output_path="dataset_info.json"):
895
+ """
896
+ Save dataset information to JSON file
897
+
898
+ Args:
899
+ dataset: Dataset object
900
+ output_path: Path to save the info file
901
+ """
902
+ stats = dataset_statistics(dataset)
903
+
904
+ # Add additional metadata
905
+ info = {
906
+ 'dataset_name': 'FSL Product Classification Dataset',
907
+ 'version': '1.0',
908
+ 'statistics': stats,
909
+ 'description': 'Few-shot learning dataset for product classification',
910
+ 'features': {
911
+ 'image': 'PIL Image object',
912
+ 'label': 'Class ID (int64)',
913
+ 'class_name': 'Class name string',
914
+ 'image_id': 'Unique image identifier'
915
+ }
916
+ }
917
+
918
+ # Save to file
919
+ with open(output_path, 'w') as f:
920
+ json.dump(info, f, indent=2)
921
+
922
+ print(f"Dataset info saved to: {output_path}")
923
+ return info
924
+
925
+ def verify_dataset_integrity(dataset_path="data.tzst"):
926
+ """
927
+ Verify dataset archive integrity
928
+
929
+ Args:
930
+ dataset_path: Path to the dataset archive
931
+
932
+ Returns:
933
+ bool: True if dataset is valid
934
+ """
935
+ from tzst import test_archive
936
+
937
+ try:
938
+ # Test archive integrity
939
+ is_valid = test_archive(dataset_path)
940
+
941
+ if is_valid:
942
+ print(f"✅ Dataset archive '{dataset_path}' is valid")
943
+
944
+ # Get archive info
945
+ from tzst import list_archive
946
+ contents = list_archive(dataset_path, verbose=True)
947
+
948
+ print(f"📁 Archive contains {len(contents)} files")
949
+
950
+ # Check for expected structure
951
+ class_dirs = [item['name'] for item in contents
952
+ if item['name'].startswith('class_') and item['name'].endswith('/')]
953
+ print(f"🏷️ Found {len(class_dirs)} class directories")
954
+
955
+ return True
956
+ else:
957
+ print(f"❌ Dataset archive '{dataset_path}' is corrupted")
958
+ return False
959
+
960
+ except Exception as e:
961
+ print(f"❌ Error verifying dataset: {e}")
962
+ return False
963
+
964
+ def create_data_splits(dataset, split_ratios={'train': 0.8, 'test': 0.2},
965
+ strategy='random', seed=42):
966
+ """
967
+ Create train/test splits from the dataset
968
+
969
+ Args:
970
+ dataset: Dataset object
971
+ split_ratios: Dictionary with split names and ratios
972
+ strategy: 'random' or 'stratified'
973
+ seed: Random seed
974
+
975
+ Returns:
976
+ dict: Split datasets
977
+ """
978
+ from sklearn.model_selection import train_test_split
979
+
980
+ np.random.seed(seed)
981
+
982
+ if strategy == 'random':
983
+ # Simple random split
984
+ indices = list(range(len(dataset)))
985
+ train_size = split_ratios.get('train', 0.8)
986
+
987
+ train_indices, test_indices = train_test_split(
988
+ indices, train_size=train_size, random_state=seed
989
+ )
990
+
991
+ splits = {
992
+ 'train': dataset.select(train_indices),
993
+ 'test': dataset.select(test_indices)
994
+ }
995
+
996
+ elif strategy == 'stratified':
997
+ # Stratified split maintaining class distribution
998
+ labels = dataset['label']
999
+ indices = list(range(len(dataset)))
1000
+ train_size = split_ratios.get('train', 0.8)
1001
+
1002
+ train_indices, test_indices = train_test_split(
1003
+ indices, train_size=train_size, stratify=labels, random_state=seed
1004
+ )
1005
+
1006
+ splits = {
1007
+ 'train': dataset.select(train_indices),
1008
+ 'test': dataset.select(test_indices)
1009
+ }
1010
+
1011
+ # Print split information
1012
+ for split_name, split_dataset in splits.items():
1013
+ n_samples = len(split_dataset)
1014
+ n_classes = len(set(split_dataset['label']))
1015
+ print(f"{split_name.capitalize()} split: {n_samples} samples, {n_classes} classes")
1016
+
1017
+ return splits
1018
+
1019
+ # Example usage of utility functions
1020
+ def analyze_dataset(dataset_path="data.tzst"):
1021
+ """
1022
+ Complete dataset analysis workflow
1023
+ """
1024
+ print("🔍 Analyzing FSL Product Classification Dataset")
1025
+ print("=" * 50)
1026
+
1027
+ # 1. Verify dataset integrity
1028
+ print("\n1. Verifying dataset integrity...")
1029
+ is_valid = verify_dataset_integrity(dataset_path)
1030
+
1031
+ if not is_valid:
1032
+ return
1033
+
1034
+ # 2. Load dataset
1035
+ print("\n2. Loading dataset...")
1036
+ try:
1037
+ dataset = load_dataset("xixu-me/fsl-product-classification")["train"]
1038
+ print(f"✅ Successfully loaded dataset with {len(dataset)} samples")
1039
+ except Exception as e:
1040
+ print(f"❌ Error loading dataset: {e}")
1041
+ return
1042
+
1043
+ # 3. Generate statistics
1044
+ print("\n3. Generating statistics...")
1045
+ stats = dataset_statistics(dataset)
1046
+
1047
+ print(f"📊 Dataset Statistics:")
1048
+ print(f" Total samples: {stats['total_samples']:,}")
1049
+ print(f" Total classes: {stats['total_classes']:,}")
1050
+ print(f" Avg samples per class: {stats['avg_samples_per_class']:.1f}")
1051
+ print(f" Min samples per class: {stats['min_samples_per_class']}")
1052
+ print(f" Max samples per class: {stats['max_samples_per_class']}")
1053
+ print(f" Std samples per class: {stats['std_samples_per_class']:.1f}")
1054
+
1055
+ # 4. Plot distributions
1056
+ print("\n4. Plotting class distribution...")
1057
+ plot_class_distribution(dataset, top_k=30)
1058
+
1059
+ # 5. Save dataset info
1060
+ print("\n5. Saving dataset information...")
1061
+ save_dataset_info(dataset)
1062
+
1063
+ # 6. Create splits
1064
+ print("\n6. Creating data splits...")
1065
+ splits = create_data_splits(dataset, strategy='stratified')
1066
+
1067
+ print("\n✅ Dataset analysis complete!")
1068
+ return dataset, stats, splits
1069
+
1070
+ # Run analysis
1071
+ # dataset, stats, splits = analyze_dataset()
1072
+ ```
1073
+
1074
+ ## Troubleshooting
1075
+
1076
+ ### Common Issues and Solutions
1077
+
1078
+ #### 1. Archive Extraction Issues
1079
+
1080
+ **Problem**: Error extracting `data.tzst` file
1081
+
1082
+ ```text
1083
+ TzstDecompressionError: Failed to decompress archive
1084
+ ```
1085
+
1086
+ **Solution**:
1087
+
1088
+ ```python
1089
+ # Verify archive integrity first
1090
+ from tzst import test_archive
1091
+ if not test_archive("data.tzst"):
1092
+ print("Archive is corrupted. Please re-download.")
1093
+
1094
+ # Use streaming mode for large archives
1095
+ from tzst import extract_archive
1096
+ extract_archive("data.tzst", "output/", streaming=True)
1097
+ ```
1098
+
1099
+ #### 2. Memory Issues with Large Dataset
1100
+
1101
+ **Problem**: Out of memory when loading the full dataset
1102
+
1103
+ **Solution**:
1104
+
1105
+ ```python
1106
+ # Use streaming dataset
1107
+ from datasets import load_dataset
1108
+ dataset = load_dataset("xixu-me/fsl-product-classification", streaming=True)
1109
+
1110
+ # Or load in chunks
1111
+ def load_dataset_chunked(chunk_size=1000):
1112
+ dataset = load_dataset("xixu-me/fsl-product-classification")["train"]
1113
+ for i in range(0, len(dataset), chunk_size):
1114
+ chunk = dataset.select(range(i, min(i + chunk_size, len(dataset))))
1115
+ yield chunk
1116
+ ```
1117
+
1118
+ #### 3. Non-continuous Class Labels
1119
+
1120
+ **Problem**: Class labels are not continuous (0, 1, 2, ...)
1121
+
1122
+ **Solution**:
1123
+
1124
+ ```python
1125
+ # Create label mapping
1126
+ unique_labels = sorted(set(dataset['label']))
1127
+ label_to_idx = {label: idx for idx, label in enumerate(unique_labels)}
1128
+
1129
+ # Apply mapping
1130
+ def map_labels(example):
1131
+ example['mapped_label'] = label_to_idx[example['label']]
1132
+ return example
1133
+
1134
+ dataset = dataset.map(map_labels)
1135
+ ```
1136
+
1137
+ #### 4. CUDA/GPU Issues
1138
+
1139
+ **Problem**: CUDA out of memory during training
1140
+
1141
+ **Solution**:
1142
+
1143
+ ```python
1144
+ # Reduce batch size or use CPU
1145
+ device = torch.device('cpu') # Force CPU usage
1146
+
1147
+ # Or use gradient accumulation
1148
+ accumulation_steps = 4
1149
+ for i, (support_data, query_data) in enumerate(dataloader):
1150
+ loss = model(support_data, query_data) / accumulation_steps
1151
+ loss.backward()
1152
+
1153
+ if (i + 1) % accumulation_steps == 0:
1154
+ optimizer.step()
1155
+ optimizer.zero_grad()
1156
+ ```
1157
+
1158
+ ### Performance Tips
1159
+
1160
+ 1. **Use appropriate image sizes**: For few-shot learning, 84x84 or 224x224 are common choices
1161
+ 2. **Enable streaming mode**: For memory-efficient processing of large archives
1162
+ 3. **Use data augmentation**: Improve few-shot performance with transforms (see the sketch after this list)
1163
+ 4. **Cache preprocessed data**: Save processed episodes to disk for faster iteration
1164
+
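+ The augmentation and caching tips can look like the sketch below. Both the specific transform pipeline and the cache format (`torch.save` of episode tensors) are illustrative assumptions, not a tuned or prescribed recipe.
+
+ ```python
+ import torch
+ from pathlib import Path
+ from torchvision import transforms
+
+ # Example training-time augmentation for 84x84 episodes; pass it to
+ # FSLProductDataset via the `transform` argument
+ train_transform = transforms.Compose([
+     transforms.Resize((92, 92)),
+     transforms.RandomCrop(84),
+     transforms.RandomHorizontalFlip(),
+     transforms.ColorJitter(brightness=0.2, contrast=0.2),
+     transforms.ToTensor(),
+     transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ ])
+
+ def cache_episodes(fsl_dataset, num_episodes=100, cache_dir="episode_cache",
+                    n_way=5, k_shot=5, n_query=15):
+     """Pre-sample episodes once and store them as tensors for faster iteration."""
+     cache = Path(cache_dir)
+     cache.mkdir(exist_ok=True)
+     for i in range(num_episodes):
+         support_data, query_data = fsl_dataset.create_episode_dataloader(
+             n_way=n_way, k_shot=k_shot, n_query=n_query
+         )
+         torch.save({"support": support_data, "query": query_data},
+                    cache / f"episode_{i:04d}.pt")
+
+ # cache_episodes(fsl_dataset, num_episodes=10)
+ ```
+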
1165
+ ## Citation
1166
+
1167
+ If you use this dataset in your research, please cite:
1168
+
1169
+ ```bibtex
1170
+ @dataset{fsl_product_classification_2025,
1171
+ title={Few-Shot Learning Product Classification Dataset},
1172
+ author={Xi Xu},
1173
+ year={2025},
1174
+ publisher={Hugging Face},
1175
+ url={https://huggingface.co/datasets/xixu-me/fsl-product-classification}
1176
+ }
1177
+ ```
1178
+
1179
+ ## License
1180
+
1181
+ This dataset is released under the MIT License. See the [LICENSE file](LICENSE) for details.
1182
+
1183
+ ## Data Ethics and Responsible Use
1184
+
1185
+ This dataset is intended for academic research and educational purposes in few-shot learning and computer vision. Users should:
1186
+
1187
+ - **Respect intellectual property**: Images may be subject to copyright; use only for research purposes
1188
+ - **Consider bias**: Be aware that product categories may reflect certain demographic or geographic biases
1189
+ - **Commercial use**: While the license permits it, consider the ethical implications of commercial applications
1190
+ - **Attribution**: Please cite this dataset in any published work
1191
+
1192
+ ## Limitations
1193
+
1194
+ - **Image quality**: Variable image quality and backgrounds may affect model performance
1195
+ - **Class imbalance**: Some classes may have significantly fewer images than others
1196
+ - **Non-continuous labels**: Class numbers are not sequential, which may require label mapping
1197
+ - **Temporal bias**: Product images reflect trends from the time of collection
dataset_infos.json ADDED
@@ -0,0 +1,43 @@
1
+ {
2
+ "default": {
3
+ "citation": "@dataset{fsl_product_classification_2025,\n  title={Few-Shot Learning Product Classification Dataset},\n  author={Xi Xu},\n  year={2025},\n  publisher={Hugging Face},\n  url={https://huggingface.co/datasets/xixu-me/fsl-product-classification}\n}",
4
+ "dataset_size": 9945644054,
5
+ "description": "Few-Shot Learning Product Classification Dataset containing 763 product classes with an average of roughly 367 images per class (279,747 images in total). The dataset is designed for few-shot learning research in product classification tasks, covering diverse e-commerce and retail scenarios.",
6
+ "download_checksums": {
7
+ "data.tzst": {
8
+ "checksum": null,
9
+ "num_bytes": 9945644054
10
+ }
11
+ },
12
+ "download_size": 9945644054,
13
+ "features": {
14
+ "class_name": {
15
+ "_type": "Value",
16
+ "dtype": "string"
17
+ },
18
+ "image": {
19
+ "_type": "Image",
20
+ "dtype": "image"
21
+ },
22
+ "image_id": {
23
+ "_type": "Value",
24
+ "dtype": "string"
25
+ },
26
+ "label": {
27
+ "_type": "Value",
28
+ "dtype": "int64"
29
+ }
30
+ },
31
+ "homepage": "https://huggingface.co/datasets/xixu-me/fsl-product-classification",
32
+ "license": "MIT",
33
+ "size_in_bytes": 19891288108,
34
+ "splits": {
35
+ "train": {
36
+ "dataset_name": "fsl_product_classification",
37
+ "name": "train",
38
+ "num_bytes": 9945644054,
39
+ "num_examples": 279747
40
+ }
41
+ }
42
+ }
43
+ }