---
dataset_info:
features:
- name: id
dtype: int64
- name: query_id
dtype: int64
- name: product_id
dtype: int64
- name: label
dtype:
class_label:
names:
'0': Irrelevant
'1': Partial
'2': Exact
- name: query
dtype: string
- name: query_class
dtype: string
- name: product_name
dtype: string
- name: product_class
dtype: string
- name: category hierarchy
dtype: string
- name: product_description
dtype: string
- name: product_features
dtype: string
- name: rating_count
dtype: float64
- name: average_rating
dtype: float64
- name: review_count
dtype: float64
splits:
- name: train
num_bytes: 331042200.4486481
num_examples: 140068
- name: dev
num_bytes: 110348975.77567595
num_examples: 46690
- name: test
num_bytes: 110348975.77567595
num_examples: 46690
download_size: 212373125
dataset_size: 551740152
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
license: mit
task_categories:
- sentence-similarity
- text-classification
language:
- en
size_categories:
- 100K<n<1M
---
# WANDS - Wayfair ANnotation Dataset: Dataset for product search relevance assessment
- Original source of the data: https://github.com/wayfair/WANDS
- Train/dev/test split of 3:1:1, following footnote 5 of https://arxiv.org/abs/2307.00370
## Details
* 42,994 candidate products
* 480 queries
* 233,448 (query,product) relevance judgements
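
As a quick sanity check, the split sizes in this card reconcile with the total number of judgements and the 3:1:1 ratio:

```python
# Sanity check: the documented split sizes add up to the total number
# of (query, product) relevance judgements, in roughly a 3:1:1 ratio.
total = 233_448
train, dev, test = 140_068, 46_690, 46_690

assert train + dev + test == total
assert dev == test
# train is ~60% of the data, dev and test ~20% each
assert abs(train / total - 0.6) < 1e-3
assert abs(dev / total - 0.2) < 1e-3
```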
## Column details
* Product columns:
* product_id - ID of a product
* product_name - String of product name
* product_class - Category which product falls under
* category_hierarchy - Parent categories of product, delimited by ```/```
* product_description - String description of product
* product_features - ```|``` delimited string of attribute:value pairs which describe the product
* rating_count - Number of user ratings for product
* average_rating - Average rating the product received
* review_count - Number of user reviews for product
* Search queries columns:
* query_id - unique ID for each query
* query - query string
* query_class - category to which the query falls under
* Annotation columns (one row per (query, product) relevance judgement):
* id - Unique ID for each annotation
* label - Relevance label, one of 'Exact', 'Partial', or 'Irrelevant'
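
The two delimited string columns above can be parsed into structured values. A minimal sketch; the example strings are hypothetical illustrations of the documented delimiters, not real rows:

```python
# Illustrative parsers for the delimited string columns.
# 'category hierarchy' is '/'-delimited; 'product_features' is a
# '|'-delimited list of attribute:value pairs.

def parse_category_hierarchy(value: str) -> list[str]:
    """Split the '/'-delimited 'category hierarchy' column into categories."""
    return [part.strip() for part in value.split("/")]

def parse_product_features(value: str) -> dict[str, str]:
    """Split the '|'-delimited attribute:value pairs in 'product_features'."""
    features = {}
    for pair in value.split("|"):
        key, sep, val = pair.partition(":")
        if sep:  # skip malformed pairs with no ':'
            features[key.strip()] = val.strip()
    return features

# Hypothetical example values:
print(parse_category_hierarchy("Furniture / Living Room Furniture / Sofas"))
print(parse_product_features("color : gray | material : wood"))
```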
# Citation
Please cite this paper if you are building on top of or using this dataset:
```text
@InProceedings{wands,
title = {WANDS: Dataset for Product Search Relevance Assessment},
author = {Chen, Yan and Liu, Shujian and Liu, Zheng and Sun, Weiyi and Baltrunas, Linas and Schroeder, Benjamin},
booktitle = {Proceedings of the 44th European Conference on Information Retrieval},
year = {2022},
numpages = {12}
}
```
# Code for generating the dataset
```python
import pandas as pd
from datasets import ClassLabel, Dataset, DatasetDict

base_path = "https://github.com/wayfair/WANDS/raw/main/dataset"
query_df = pd.read_csv(f"{base_path}/query.csv", sep="\t")
product_df = pd.read_csv(f"{base_path}/product.csv", sep="\t")
label_df = pd.read_csv(f"{base_path}/label.csv", sep="\t")

# Join labels with their queries and products into one flat table
df_dataset = label_df.merge(
    query_df, on="query_id"
).merge(
    product_df, on="product_id"
)

# Encode the string labels as a ClassLabel feature
wands_class_label_feature = ClassLabel(num_classes=3, names=["Irrelevant", "Partial", "Exact"])
dataset = Dataset.from_pandas(df_dataset, preserve_index=False)
dataset = dataset.cast_column("label", wands_class_label_feature)

# 3:1:1 train/dev/test split: first carve off 2/5, then halve it
dataset = dataset.train_test_split(test_size=2/5, seed=1337)
dev_test_dataset = dataset["test"].train_test_split(test_size=1/2, seed=1337)
dataset = DatasetDict(
train=dataset["train"],
dev=dev_test_dataset["train"],
test=dev_test_dataset["test"],
)
"""
DatasetDict({
train: Dataset({
features: ['id', 'query_id', 'product_id', 'label', 'query', 'query_class', 'product_name', 'product_class', 'category hierarchy', 'product_description', 'product_features', 'rating_count', 'average_rating', 'review_count'],
num_rows: 140068
})
dev: Dataset({
features: ['id', 'query_id', 'product_id', 'label', 'query', 'query_class', 'product_name', 'product_class', 'category hierarchy', 'product_description', 'product_features', 'rating_count', 'average_rating', 'review_count'],
num_rows: 46690
})
test: Dataset({
features: ['id', 'query_id', 'product_id', 'label', 'query', 'query_class', 'product_name', 'product_class', 'category hierarchy', 'product_description', 'product_features', 'rating_count', 'average_rating', 'review_count'],
num_rows: 46690
})
})
"""
dataset.push_to_hub("napsternxg/wands")
```
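
After the `cast_column` step, `label` is stored as an integer id, which downstream code can map back to names with `ClassLabel.int2str` / `str2int`. A dependency-free sketch of the same mapping, with the names copied from the `ClassLabel` above:

```python
# Mirror of the ClassLabel mapping used when building the dataset:
# 0 -> Irrelevant, 1 -> Partial, 2 -> Exact.
LABEL_NAMES = ["Irrelevant", "Partial", "Exact"]

def int2str(label_id: int) -> str:
    """Decode an integer label id to its name (like ClassLabel.int2str)."""
    return LABEL_NAMES[label_id]

def str2int(name: str) -> int:
    """Encode a label name to its integer id (like ClassLabel.str2int)."""
    return LABEL_NAMES.index(name)

print(int2str(2), str2int("Partial"))  # Exact 1
```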