---
language:
- en
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: query
    dtype: string
  - name: product_title
    dtype: string
  - name: product_description
    dtype: string
  - name: median_relevance
    dtype: float64
  - name: relevance_variance
    dtype: float64
  - name: split
    dtype: string
  splits:
  - name: train
    num_bytes: 5156813
    num_examples: 10158
  - name: test
    num_bytes: 14636826
    num_examples: 22513
  download_size: 10796818
  dataset_size: 19793639
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Crowdflower Search Results Relevance
* Original source: https://www.kaggle.com/c/crowdflower-search-relevance/overview
* More detailed version: https://data.world/crowdflower/ecommerce-search-relevance
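## Loading the dataset
A minimal sketch for loading the processed dataset with the `datasets` library, assuming the repository id used in the upload code further below:
```python
from datasets import load_dataset

# Repository id taken from the push_to_hub call in the generation code below.
dataset = load_dataset("napsternxg/kaggle_crowdflower_ecommerce_search_relevance")

print(dataset)              # DatasetDict with "train" (10,158 rows) and "test" (22,513 rows)
print(dataset["train"][0])  # id, query, product_title, product_description, median_relevance, relevance_variance, split
```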
## Citation
```
@misc{crowdflower-search-relevance,
  author = {AaronZukoff and Anna Montoya and JustinTenuto and Wendy Kan},
  title = {Crowdflower Search Results Relevance},
  publisher = {Kaggle},
  year = {2015},
  url = {https://kaggle.com/competitions/crowdflower-search-relevance}
}
```
## Code used to generate the dataset
```python
# ! unzip train.csv.zip
# ! unzip test.csv.zip
import pandas as pd
from datasets import Dataset, DatasetDict

# Concatenate the competition files, tagging each row with its original split.
df_comp = pd.concat([
    pd.read_csv("./train.csv").assign(split="train"),
    pd.read_csv("./test.csv").assign(split="test"),
])

# Rebuild the train/test partition as separate Dataset splits and push to the Hub.
dataset = DatasetDict(
    train=Dataset.from_pandas(df_comp[df_comp["split"] == "train"].reset_index(drop=True)),
    test=Dataset.from_pandas(df_comp[df_comp["split"] == "test"].reset_index(drop=True)),
)
dataset.push_to_hub("napsternxg/kaggle_crowdflower_ecommerce_search_relevance")
```
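The `split` column assigned during concatenation is kept on every row, so the original competition partition can still be recovered even if the two splits are later merged back into a single table.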