---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- amazon
- products
- binary
- text
pretty_name: Amazon Scrape 4 llm
size_categories:
- n<1K
configs:
- config_name: phones
  data_files:
  - split: train
    path: data/phones.csv
- config_name: laptops
  data_files:
  - split: train
    path: data/laptops.csv
---
# Amazon Scrape 4 llm
## Purpose

Feed an LLM raw HTML so it can identify products on an e-commerce platform.

These datasets contain the extracted innerTexts of all HTML nodes from different e-commerce product pages. The cleaning process dramatically reduces the token count, e.g. from ~450k to ~6k tokens per page.
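The dataset itself was produced with the JS snippet shown further down, but the idea of the cleaning step can be sketched in pure Python. This is a rough, simplified illustration (not the actual pipeline): walk the HTML, keep only the text of each node, drop `script`/`style` content, and observe how much smaller the result is.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text content of every HTML node, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        text = data.strip()
        if text and not self.skip:
            self.chunks.append(text)

# A toy product page; a real scraped page would be hundreds of KB.
page = """
<html><body>
  <script>var tracking = 1;</script>
  <div class="product"><h1>Phone X</h1><span>$499</span></div>
</body></html>
"""

parser = TextExtractor()
parser.feed(page)
cleaned = "\n".join(parser.chunks)
print(cleaned)  # -> "Phone X" and "$499" on separate lines
print(len(page), "->", len(cleaned))  # markup and scripts are gone
```

On real product pages this kind of reduction is what takes a page from hundreds of thousands of tokens down to a few thousand.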
## Quickstart

```python
from datasets import load_dataset

data_train = load_dataset("timashan/amazon-scrape-4-llm", "phones")
data_test = load_dataset("timashan/amazon-scrape-4-llm", "laptops")
```
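Each config loads as a single `train` split of CSV rows. The exact column names are not documented on this card, so the sketch below uses a hypothetical `text` column holding a row's extracted innerTexts, just to show the intended prompt shape for binary classification:

```python
# Hypothetical row: the real CSV columns may differ from `text`/`label`.
example = {"text": "Phone X\n$499\nAdd to Cart", "label": 1}

# Build a binary-classification prompt from one row's extracted innerTexts.
prompt = (
    "Below is the extracted text of an e-commerce page.\n"
    "Does it describe a product? Answer 1 for yes, 0 for no.\n\n"
    + example["text"]
)
print(prompt)
```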
## JS snippet used for cleansing

```js
// Run in the browser console on a product page.
// Assumes `sanitizeHtml` from the sanitize-html package and
// `convert` from the html-to-text package are in scope.
const allowedTags = sanitizeHtml.defaults.allowedTags;
allowedTags.splice(allowedTags.indexOf("a"), 1); // also strip <a> tags
convert(sanitizeHtml(document.body.innerHTML, { allowedTags }));
```