---
dataset_info:
  features:
    - name: query_id
      dtype: int64
    - name: query
      dtype: string
    - name: document
      dtype: string
  splits:
    - name: retail
      num_bytes: 16261464
      num_examples: 5000
    - name: videogames
      num_bytes: 7786542
      num_examples: 4360
    - name: books
      num_bytes: 2858945
      num_examples: 2245
    - name: news
      num_bytes: 11619385
      num_examples: 2375
    - name: web
      num_bytes: 17871918
      num_examples: 1500
    - name: debate
      num_bytes: 10085407
      num_examples: 880
  download_size: 33921309
  dataset_size: 66483661
configs:
  - config_name: default
    data_files:
      - split: retail
        path: data/retail-*
      - split: videogames
        path: data/videogames-*
      - split: books
        path: data/books-*
      - split: news
        path: data/news-*
      - split: web
        path: data/web-*
      - split: debate
        path: data/debate-*
language:
  - en
license: apache-2.0
tags:
  - SEO
  - CSEO
  - RAG
  - conversational-search-engine
---

# Dataset Summary

C-SEO Bench is a benchmark designed to evaluate conversational search engine optimization (C-SEO) techniques across two common tasks: product recommendation and question answering. Each task spans multiple domains to assess the domain-specific effects and the generalization ability of C-SEO methods.
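
For quick reference, a split of this dataset can be loaded with the Hugging Face `datasets` library. This is only a minimal sketch: the repository id below is an assumption, so replace it with the actual path of this dataset.

```python
from datasets import load_dataset

# NOTE: hypothetical repository id -- replace with the actual Hugging Face path.
REPO_ID = "Parameter-Lab/c-seo-bench"

# Each split corresponds to one domain: retail, videogames, books, news, web, debate.
retail = load_dataset(REPO_ID, split="retail")

# Every row carries a query_id, the query text, and one candidate document.
example = retail[0]
print(example["query_id"], example["query"])
```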

# Supported Tasks and Domains

## Product Recommendation

This task requires an LLM to recommend the top-k products relevant to a user query, using only the content of the 10 retrieved product descriptions. The task simulates a cold-start setting with no user profile; a prompt-construction sketch follows the domain list below. Domains:

- **Retail**: Queries and product descriptions from Amazon.
- **Video Games**: Search tags and game descriptions from Steam.
- **Books**: GPT-generated queries with book synopses from the Google Books API.
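
For illustration only (this is not the official evaluation code), the 10 product descriptions that share a `query_id` can be grouped and formatted into a single recommendation prompt. The repository id, prompt wording, and `top_k` value below are assumptions.

```python
from collections import defaultdict

from datasets import load_dataset

# Hypothetical repository id -- adjust to the actual dataset path.
retail = load_dataset("Parameter-Lab/c-seo-bench", split="retail")

# Group the candidate product descriptions that share a query_id.
queries, docs_per_query = {}, defaultdict(list)
for row in retail:
    queries[row["query_id"]] = row["query"]
    docs_per_query[row["query_id"]].append(row["document"])

def build_recommendation_prompt(query_id, top_k=3):
    """Format one top-k recommendation prompt; the wording is illustrative only."""
    numbered = "\n\n".join(
        f"[{i + 1}] {doc}" for i, doc in enumerate(docs_per_query[query_id])
    )
    return (
        f"User query: {queries[query_id]}\n\n"
        f"Candidate products:\n{numbered}\n\n"
        f"Recommend the top-{top_k} products for this query, citing them by number."
    )

some_query_id = next(iter(queries))
print(build_recommendation_prompt(some_query_id))
```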

## Question Answering

This task involves answering queries based on multiple retrieved passages; a prompt sketch follows the domain list below. Domains:

- **Web Questions**: Real search engine queries with retrieved web content.
- **News**: GPT-generated questions over sets of related news articles.
- **Debate**: Opinionated queries requiring multi-perspective evidence.
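
Analogously, a question-answering prompt can be assembled from the passages retrieved for a query. The sketch below only illustrates the idea and does not reproduce the benchmark's official prompt.

```python
def build_qa_prompt(query, passages):
    """Illustrative QA prompt: answer using only the retrieved passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"Question: {query}\n\n"
        f"Passages:\n{context}\n\n"
        "Answer the question using only the passages above, "
        "citing the passage numbers you rely on."
    )
```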

Total: Over 1.9k queries and 16k documents across six domains.

For more information about the dataset construction, please refer to the original publication.

Developed at Parameter Lab with the support of Naver AI Lab.

# Disclaimer

This repository contains experimental research artifacts and results, and is published for the sole purpose of providing additional background details on the respective publication.

# Citation

If this work is useful to you, please consider citing it:

TODO

✉️ Contact person: Haritz Puerto, [email protected]

🏢 https://www.parameterlab.de/

Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.